diff --git a/docs/core-concepts/raven.md b/docs/core-concepts/raven.md index 02edbf1e63d..c1770ebbd65 100644 --- a/docs/core-concepts/raven.md +++ b/docs/core-concepts/raven.md @@ -63,13 +63,5 @@ Note: The Raven Controller Manager required by the Raven component is refactored | v0.3.0 | openyurt/raven-agent:v0.3.0 | 2023.1 | feature | Support node IP forwarding | | v0.4.0 | openyurt/raven-agent:0.4.0 | 2023.11 | feature | Support raven l7 proxy | | v0.4.1 | openyurt/raven-agent:0.4.1 | 2024.3 | feature | Support raven l3 NAT traverse | -## 5. future plan - -- Support SLB as public network exporter for gateway 【[issue #22](https://github.com/openyurtio/raven/issues/22)】 -- Support NAT traversal 【[issue #13](https://github.com/openyurtio/raven/issues/13)】 -- Support distribute route path decision 【[issue #14](https://github.com/openyurtio/raven/issues/14)】 - - route path cost evaluation - - shortest path decision - - keep networking connection alive during paths change - -Welcome interested students to join us and contribute code!! +| v0.4.2 | openyurt/raven-agent:0.4.2 | 2024.4 | feature | Support compatibility with iptables-nft | +| v0.4.3 | openyurt/raven-agent:0.4.3 | 2024.12 | feature | Stability improvements and bug fixes to enhance overall network reliability | \ No newline at end of file diff --git a/docs/developer-manuals/how-to-build-and-test.md b/docs/developer-manuals/how-to-build-and-test.md index 5a1b7a505bb..0b2555a4611 100644 --- a/docs/developer-manuals/how-to-build-and-test.md +++ b/docs/developer-manuals/how-to-build-and-test.md @@ -2,16 +2,6 @@ title: How to Build and Test --- -In [OpenYurt repository](https://github.com/openyurtio/openyurt), currently`(v0.7.0, commit: 68a18ee)` 7 components are contained, including: - -1. yurthub -2. yurt-controller-manager -3. yurt-tunnel-server -4. yurt-tunnel-agent -5. yurtctl -6. yurtadm -7. 
yurt-node-servant - This article will give you an introduction of how to build and test the code after development of above components. ## How to build @@ -34,7 +24,7 @@ This command compiles yurtadm based on the OS and architecture of the local host make docker-build TARGET_PLATFORMS="${TARGET_PLATFORMS}" REGION="${your_region}" ``` -`TARGET_PLATFORMS`: indicates the OS and architecture to which the component will run. Currently, Linux/amd64, Linux/arm, and Linux/arm64 are supported. +`TARGET_PLATFORMS`: indicates the OS and architecture on which the component will run. Currently, linux/amd64, linux/arm, and linux/arm64 are supported. `REGION`: This parameter affects the GOPROXY used during compilation. Users in China are advised to set `REGION=cn` to ensure proper construction (cn indicates `GOPROXY=https://goproxy.cn`). diff --git a/docs/developer-manuals/local-up-openyurt.md b/docs/developer-manuals/local-up-openyurt.md index e62ab71797f..908ef18a707 100644 --- a/docs/developer-manuals/local-up-openyurt.md +++ b/docs/developer-manuals/local-up-openyurt.md @@ -2,6 +2,22 @@ title: Local Up OpenYurt --- +## Environment preparation + +Before you run the local cluster scripts, set up your environment as follows. + +1. **Install Go** — Download and install a supported Go toolchain from [go.dev/dl](https://go.dev/dl/), then confirm with `go version`. + +2. **Install Docker** — Install Docker Engine and ensure the Docker daemon is running. See [Get Docker](https://docs.docker.com/get-docker/). + +3. **Add `GOPATH/bin` to `PATH`** — Binaries installed with `go install` (for example `kind`) are written to `$(go env GOPATH)/bin`. Add that directory to your `PATH` so your shell can find them. + +```bash +export PATH="$(go env GOPATH)/bin:$PATH" +``` + +Reload the shell configuration or open a new terminal, then verify with `echo "$PATH"` or by running a tool you installed with `go install`. 
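The `export` above only lasts for the current shell session. A minimal sketch for persisting it, assuming a hypothetical `PROFILE` variable that points at your shell's startup file (adjust for `~/.zshrc` or similar):

```shell
# Persist the Go bin directory on PATH by appending the export line once.
# PROFILE is a placeholder for this sketch; it defaults to ~/.bashrc here.
PROFILE="${PROFILE:-$HOME/.bashrc}"
LINE='export PATH="$(go env GOPATH)/bin:$PATH"'
touch "$PROFILE"
# Only append when the exact line is not already present (keeps the profile clean).
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
```

The `grep -qxF` guard makes the snippet safe to re-run without duplicating the line.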
+ ## How to use If you don't have the openyurt cluster, you can run the bash shell [`local-up-openyurt.sh`](https://github.com/openyurtio/openyurt/blob/master/hack/make-rules/local-up-openyurt.sh) to quickly set up the openyurt cluster at your local host. @@ -13,6 +29,9 @@ make docker-build-and-up-openyurt # startup a OpenYurt cluster based on prepared images make local-up-openyurt + +# run e2e test +make e2e-tests ``` Then you can use `kubectl` to interact with your OpenYurt cluster. @@ -23,12 +42,12 @@ Then you can use `kubectl` to interact with your OpenYurt cluster. In summary, the `local-up-openyurt.sh` will use the local files under the openyurt work path to set up the cluster. And you can specify the behavior of the shell through setting environment variables. -It will use `kind` to set up the kubernetes cluster. You can set `KUBERNETESVERSION` to specify the kubernetes version to use. For instance, `export KUBERNETESVERSION=1.23` before running the shell will enable you to use kubernetes v1.23. In addition, you can set `NODES_NUM` to specify the number of nodes the cluster will contain. +It will use `kind` to set up the kubernetes cluster. You can set `KUBERNETESVERSION` to specify the kubernetes version to use. For instance, `export KUBERNETESVERSION=1.34` before running the shell will enable you to use kubernetes v1.34. In addition, you can set `NODES_NUM` to specify the number of nodes the cluster will contain. >Note: ->1. The format of `KUBERNETESVERSION` is `1.xx`, other formats will not be accepted. The default KUBERNETESVERSION is `1.22`. +>1. The format of `KUBERNETESVERSION` is `1.xx`, other formats will not be accepted. The default KUBERNETESVERSION is `1.34`. >2. `NODES_NUM` should not be less than 2. Finally, the cluster will contains one control-plane node and `NODES_NUM-1` woker nodes. The default `NODES_NUM` is 2. 
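The `1.xx` format rule in the note above can be checked before invoking the script. A minimal sketch, assuming a hypothetical `check_kubernetes_version` helper (not part of `local-up-openyurt.sh`):

```shell
# Return success only when the value matches the "1.xx" form the script accepts.
check_kubernetes_version() {
  printf '%s' "$1" | grep -Eq '^1\.[0-9]+$'
}

check_kubernetes_version "1.34" && echo "1.34: ok"
check_kubernetes_version "v1.23" || echo "v1.23: rejected"
```

Forms such as `v1.23` or `1.23.4` fail the check, matching the note's "other formats will not be accepted".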
-At last, Finally, OpenYurt components will be installed in kubernetes cluster, including 'Yurthub', 'Yurt-Controller-Manager', 'Yurt-tunnel-Agent' and 'Yurt-Tunnel-Server'. +Finally, OpenYurt components will be installed in the kubernetes cluster, including 'Yurthub', 'Yurt-Manager' and 'Raven'. By now, you've got the OpenYurt cluster at your local host and you can interact with it using `kubectl`. `kind` will automatically stored the kubeconfig at your `KUBECONFIG` path (default path is `${HOME}/.kube/config)`. If you already have the `KUBECONFIG` to interact with other clusters, `kind` will add a new context of openyurt cluster into the `KUBECONFIG` and automatically switch to it. You can manually switch back to the previous context using command `kubectl config use-context ${PREVIOUS_CONTEXT_NAME}`. For more details, you can see the [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). You can store the kubeconfig at another path through setting `KIND_KUBECONFIG`. @@ -46,8 +65,30 @@ NODES_NUM represents the number of nodes to set up in the new-created cluster. T 3. **KUBERNETESVERSION** -KUBERNETESVERSION declares the kubernetes version the cluster will use. The format is "1.XX". Now from 1.17 to 1.23 are supported. The default value is 1.22. +KUBERNETESVERSION declares the kubernetes version the cluster will use. The format is "1.XX". Now versions from 1.32 to 1.34 are supported. The default value is 1.34. 4. **TIMEOUT** -TIMEOUT represents the time to wait for the kind control-plane, yurt-tunnel-server and yurt-tunnel-agent to be ready. If they are not ready after the duration, the shell will exit. The default value is 120s. +TIMEOUT represents the time to wait for the kind control-plane and Yurt-Manager to be ready. If they are not ready after the duration, the shell will exit. The default value is 120s. 
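The `TIMEOUT` behavior described above amounts to a bounded poll loop. A sketch of that pattern with a hypothetical `wait_until` helper (the script's actual implementation may differ):

```shell
# Poll a command until it succeeds or `timeout` seconds elapse, then give up.
wait_until() {
  timeout="$1"; shift
  deadline=$(( $(date +%s) + timeout ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# usage sketch (assumes kubectl is configured for the kind cluster):
# wait_until 120 kubectl get --raw=/readyz
```

This mirrors why the shell exits when the control-plane or Yurt-Manager is not ready within `TIMEOUT`.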
+ +## Tips + +When you start an OpenYurt cluster (for example with `local-up-openyurt`), tune the following `inotify`-related kernel parameters on the host. Raising these limits helps avoid watch-queue exhaustion when many files are monitored (for example by container runtimes or development tools). + +Recommended values: + +| Parameter | Value | +|-----------|-------| +| `fs.inotify.max_user_watches` | 524288 | +| `fs.inotify.max_user_instances` | 2048 | +| `fs.inotify.max_queued_events` | 524288 | + +To apply until reboot: + +```bash +sudo sysctl -w fs.inotify.max_user_watches=524288 +sudo sysctl -w fs.inotify.max_user_instances=2048 +sudo sysctl -w fs.inotify.max_queued_events=524288 +``` + +To persist across reboots, add the same `key=value` lines under `/etc/sysctl.d/` (for example `/etc/sysctl.d/99-openyurt-local.conf`) and run `sudo sysctl --system`. diff --git a/docs/introduction.md b/docs/introduction.md index d56a79ade41..caf350aa02d 100644 --- a/docs/introduction.md +++ b/docs/introduction.md @@ -63,14 +63,7 @@ Currently, the framework supports five filters within its chain, with the flexib ![resource-access-control](../static/img/docs/introduction/data-filtering-framework.png) -**6. Cloud-edge network bandwidth reduction** - -A [performance test](https://openyurt.io/docs/test-report/yurthub-performance-test#traffic) has shown that in a large-scale OpenYurt cluster, the cloud-edge traffic will increase rapidly if pods are deleted and recreated since the kube-proxy components on the edge nodes watch for all endpoints/endpointslices changes. It's worth mentioning that identical endpoints data is transmitted to edge nodes within the same nodepool, which may not be the most efficient approach. This is due to the fact that cloud-edge networking traffic often relies on public networks, which can incur higher costs. 
-Leveraging the Yurt-Coordinator mentioned above, OpenYurt proposes to introduce a notion of pool-scoped metadata which are unique within a nodepool such as the endpoints/endpointslices data. As described in below Figure, the leader Yurthub will read the pool-scoped data from the cloud kube-apiserver and update the load to yurt-coordinator. As a result, all other YurtHubs will retrieve the pool-scoped data from the yurt-coordinator, eliminating the use of public network bandwidth for retrieving such data from the cloud kube-apiserver. - -![bandwidth-reduction](../static/img/docs/introduction/bandwidth-reduction.png) - -**7. Cloud-native edge device management** +**6. Cloud-native edge device management** OpenYurt defines a set of APIs for managing edge devices through cloud Kubernetes controlplane. The APIs abstract the device’s basic properties, main capabilities and the data that should be transmitted between the cloud and the edge. OpenYurt provides integration with mainstream OSS IoT device management solutions, such as EdgeXFoundry using the APIs. As described in below Figure, An instance of YurtIoTDock component and EdgeXFoundry service are deployed in each nodepool. YurtIoTDock component can get the changes of Device CRD from cloud kube-apiserver and convert the desired spec of Device CRD to requests of EdgeXFoundry, then transmit the requests to EdgeXFoundry service in real-time. On the other hand, YurtIoTDock can subscribe to the device status from EdgeXFoundry service, and update the status of Device CRD when status is changed. diff --git a/docs/user-manuals/node-management/join-a-node.md b/docs/user-manuals/node-management/join-a-node.md index 8ab7fa9151c..cdd594f4564 100644 --- a/docs/user-manuals/node-management/join-a-node.md +++ b/docs/user-manuals/node-management/join-a-node.md @@ -57,116 +57,4 @@ yurtadm join :6443 --token= --node-type=cloud --discov `yurtadm join` will automatically handle component installation. 
- **Kubernetes Components**: It installs `kubelet`, `kubeadm`, etc. You can pre-place these binaries in your `$PATH` to use a specific version, but `yurtadm` will verify that their major and minor versions match the cluster's Kubernetes version. -- **CNI Binaries**: The join process pulls specially modified CNI binaries (e.g., for Flannel) to suit edge environments. If you have prepared your own CNI binaries, place them under `/opt/cni/bin` and use the `--reuse-cni-bin=true` flag with the `yurtadm join` command. - - - -## 2. Converting an Existing Kubernetes Node - -This method is for nodes that are already part of a standard Kubernetes cluster and you want to convert them into OpenYurt edge or cloud nodes. This involves labeling the node and manually setting up YurtHub. - -### 2.1 Label and Annotate the Node - -First, identify the node as either an `edge` or `cloud` node using a label. - -**Labeling an edge node:** -```bash -kubectl label node openyurt.io/is-edge-worker=true -``` -> For a cloud node, set the label value to `false`. - -**Enabling node autonomy (Optional):** -To prevent pods from being evicted when an edge node loses connection to the control plane, add the `autonomy-duration` annotation. The node.openyurt.io/autonomy-duration annotation will map to the tolerationSeconds field in the Pod, a value of 0 indicates that Pods will never be evicted. The duration format can be found [here](https://pkg.go.dev/maze.io/x/duration#ParseDuration). -```bash -kubectl annotate node node.openyurt.io/autonomy-duration=0 -``` - -**Adding the node to a NodePool (Optional):** -To leverage OpenYurt's unitization capabilities, you can assign the node to a `NodePool`. -```bash -# First, create a NodePool if it doesn't exist -cat < apps.openyurt.io/desired-nodepool=worker -``` - -### 2.2 Setup YurtHub - -YurtHub is a critical component that acts as a proxy between the `kubelet` and the API server. It is typically deployed as a static pod. - -1. 
**Prepare the YurtHub manifest:** - - Get a bootstrap token and the API server's address. - - Use these values to populate a YurtHub manifest template (e.g., [`config/setup/yurthub.yaml`](https://github.com/openyurtio/openyurt/blob/master/config/setup/yurthub.yaml)). - - ```bash - # Replace placeholders and copy the manifest to the target node\'s manifests directory - cat config/setup/yurthub.yaml | - sed 's|__kubernetes_master_address__||; - s|__bootstrap_token__||' > /tmp/yurthub.yaml - scp /tmp/yurthub.yaml root@:/etc/kubernetes/manifests/ - ``` - -### 2.3 Reconfigure Kubelet - -Next, reconfigure the `kubelet` on the node to communicate through YurtHub instead of directly with the API server. - -1. **Create a new kubeconfig for kubelet:** - This kubeconfig points `kubelet` to the local YurtHub instance (`http://127.0.0.1:10261`). - - ```bash - # Run these commands on the target node - mkdir -p /var/lib/openyurt - cat << EOF > /var/lib/openyurt/kubelet.conf - apiVersion: v1 - clusters: - - cluster: - server: http://127.0.0.1:10261 - name: default-cluster - contexts: - - context: - cluster: default-cluster - namespace: default - user: default-auth - name: default-context - current-context: default-context - kind: Config - preferences: {} - EOF - ``` - -2. **Update the kubelet service configuration:** - Modify the `kubelet`'s systemd drop-in file to use the new kubeconfig. The file path may vary depending on your OS (e.g., `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`). - ```bash - sed -i "s|KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=\/etc\/kubernetes\/bootstrap-kubelet.conf\ --kubeconfig=\/etc\/kubernetes\/kubelet.conf|KUBELET_KUBECONFIG_ARGS=--kubeconfig=\/var\/lib\/openyurt\/kubelet.conf|g" \ - /etc/systemd/system/kubelet.service.d/10-kubeadm.conf - ``` - -3. 
**Restart the kubelet service:** - ```bash - # Run on the target node - systemctl daemon-reload && systemctl restart kubelet - ``` - After restarting, verify the node returns to a `Ready` state using `kubectl get nodes`. - -### 2.4 Restart Pods on the Node - -Finally, to ensure all pods on the node communicate through YurtHub, they must be recreated. - -**Warning:** This operation will cause a brief service interruption. Confirm the impact on your production environment before proceeding. - -```bash -# 1. List all pods running on the converted node -kubectl get pod -A -o wide | grep - -# 2. Delete all user pods and system pods (except the yurthub pod) -# The Kubernetes controllers will automatically recreate them. -kubectl delete pod -n -``` \ No newline at end of file +- **CNI Binaries**: The join process pulls specially modified CNI binaries (e.g., for Flannel) to suit edge environments. If you have prepared your own CNI binaries, place them under `/opt/cni/bin` and use the `--reuse-cni-bin=true` flag with the `yurtadm join` command. \ No newline at end of file diff --git a/docs/user-manuals/node-pool-management/delete-a-node-pool.md b/docs/user-manuals/node-pool-management/delete-a-node-pool.md index d8f198d4995..42fb8152cb6 100644 --- a/docs/user-manuals/node-pool-management/delete-a-node-pool.md +++ b/docs/user-manuals/node-pool-management/delete-a-node-pool.md @@ -1,3 +1,64 @@ --- title: Delete a node pool ---- \ No newline at end of file +--- + +**The latest version of OpenYurt NodePool resource is `apps.openyurt.io/v1beta2`.** + +**Please refer to the latest [API Reference](../../reference/api_reference.md) for details.** + +## Constraints + +You **cannot** delete a NodePool while it still contains one or more nodes. The control plane rejects the delete request until no Node in the cluster is still a member of that pool. + +### Node pool label on nodes is immutable + +The binding label `apps.openyurt.io/nodepool` on a Node is **managed by OpenYurt**. 
After a node is associated with a pool, you **must not** rely on deleting or changing this label to move or “unbind” the node: + +- The label **cannot** be removed manually. +- The label **cannot** be modified manually (for example, to point the node at another pool). + +Trying to clear or edit it with `kubectl label` is not a supported way to empty a NodePool. The only supported way to reduce pool membership is for the corresponding **Node objects to leave the cluster** (see below). + +## Check whether the pool has nodes + +List nodes that belong to the pool by label: + +```shell +kubectl get nodes -l apps.openyurt.io/nodepool=<nodepool-name> +``` + +You can also inspect the NodePool status: + +```shell +kubectl get nodepool <nodepool-name> -o yaml +``` + +If `status.nodes` is non-empty, the pool still has members and cannot be deleted. + +## Before deleting the NodePool: remove nodes from the cluster + +To delete a NodePool, every node that currently belongs to that pool must **no longer exist as a Node** in the cluster. In practice, after you have drained workloads as appropriate, remove each such node from the API (for example `kubectl delete node <node-name>`), following your operational procedure for decommissioning nodes. + +For a typical flow using `yurtadm`, see [Remove a node](../node-management/remove-a-node.md). + +After those Node objects are gone, confirm the pool has no members: + +```shell +kubectl get nodes -l apps.openyurt.io/nodepool=<nodepool-name> +``` + +The output should list no nodes. 
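For scripted cleanups, the membership check and the delete can be combined in a small guard that enforces the empty-pool rule above. A sketch (`nodepool_is_empty` and `delete_nodepool_safely` are hypothetical helpers, and `kubectl` is assumed to be configured for the cluster):

```shell
# Succeed only when no Node still carries the pool's binding label.
nodepool_is_empty() {
  [ "$(kubectl get nodes -l "apps.openyurt.io/nodepool=$1" --no-headers 2>/dev/null | wc -l)" -eq 0 ]
}

# Delete the NodePool only after confirming it has no member nodes.
delete_nodepool_safely() {
  if nodepool_is_empty "$1"; then
    kubectl delete nodepool "$1"
  else
    echo "NodePool $1 still has member nodes; remove them from the cluster first" >&2
    return 1
  fi
}

# usage: delete_nodepool_safely worker
```

The guard fails fast instead of letting the control plane reject the delete.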
+ +## Delete the NodePool + +When the pool has no nodes, delete the resource: + +```shell +kubectl delete nodepool <nodepool-name> +``` + +The short name `np` is also supported: + +```shell +kubectl delete np <nodepool-name> +``` diff --git a/docs/user-manuals/node-pool-management/edit-a-node-pool.md b/docs/user-manuals/node-pool-management/edit-a-node-pool.md index d0f55b99311..72dd331163e 100644 --- a/docs/user-manuals/node-pool-management/edit-a-node-pool.md +++ b/docs/user-manuals/node-pool-management/edit-a-node-pool.md @@ -1,3 +1,100 @@ --- title: Edit a node pool ---- \ No newline at end of file +--- + +**The latest version of OpenYurt NodePool resource is `apps.openyurt.io/v1beta2`.** + +**Please refer to the latest [API Reference](../../reference/api_reference.md) for details.** + +## Overview + +Editing a NodePool updates **supported** fields in its `spec` (and optionally `metadata` labels or annotations on the NodePool object itself). Some `spec` fields are immutable after creation; see [Fields that cannot be updated](#fields-that-cannot-be-updated). The nodepool controller reconciles allowed fields to **member nodes**: for example, labels, annotations, and taints defined in the NodePool `spec` are propagated to nodes that belong to the pool. + +This document describes how to change a NodePool with `kubectl`. + +## Fields that cannot be updated + +The following `spec` fields are **fixed at creation** and **must not** be changed afterward. The API server or admission control will reject updates that modify them: + +- `spec.type` (`Cloud` or `Edge`) +- `spec.hostNetwork` +- `spec.interConnectivity` + +If you need a different value for any of these, **create a new NodePool** with the desired settings and move nodes according to your operational procedure (see [Create a node pool](./create-a-node-pool.md) and [Join a node](../node-management/join-a-node.md)). 
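Because the API rejects updates to these fields, a script can compare them against the live object before applying a manifest. A sketch (the `immutable_fields_match` helper is illustrative, not an OpenYurt tool; `kubectl` is assumed to be configured):

```shell
# Compare the live immutable spec fields of a NodePool with desired values.
immutable_fields_match() {
  pool="$1"; want_type="$2"; want_hostnet="$3"
  live_type="$(kubectl get nodepool "$pool" -o jsonpath='{.spec.type}')"
  live_hostnet="$(kubectl get nodepool "$pool" -o jsonpath='{.spec.hostNetwork}')"
  [ "$live_type" = "$want_type" ] && [ "$live_hostnet" = "$want_hostnet" ]
}

# usage: immutable_fields_match worker Edge false || echo "create a new NodePool instead"
```

A mismatch means the manifest would be rejected, so the safer path is the new-NodePool procedure described above.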
+ +## Before you begin + +- [Yurt-Manager](https://openyurt.io/docs/installation/manually-setup/) (which includes the nodepool controller) must be installed and running. +- You need RBAC permission to update `nodepools` in the `apps.openyurt.io` API group. + +## View the current NodePool + +```shell +kubectl get nodepool +kubectl get np -o yaml +``` + +The short name `np` is supported. + +## Edit a NodePool + +### Option 1: `kubectl edit` (interactive) + +Opens the live object in your editor. After you save and exit, the update is submitted to the API server. + +```shell +kubectl edit nodepool <nodepool-name> +``` + +If validation or admission rejects the change, the editor usually reopens with an error message. + +### Option 2: `kubectl apply` (declarative) + +Maintain a manifest file, change the fields you need, then apply: + +```shell +kubectl apply -f nodepool.yaml +``` + +Use the same `apiVersion`, `kind`, `metadata.name`, and `resourceVersion` behavior as for any Kubernetes object: avoid overwriting concurrent changes unintentionally when editing YAML by hand. + +### Option 3: `kubectl patch` (targeted updates) + +Useful for small or scripted changes. Example: add or replace a label in the node template: + +```shell +kubectl patch nodepool <nodepool-name> --type merge -p '{"spec":{"labels":{"apps.openyurt.io/example":"updated"}}}' +``` + +Do not use patch (or any update) to change the immutable fields in [Fields that cannot be updated](#fields-that-cannot-be-updated). + +## Common fields you can change + +Semantics of these fields are described in [Create a node pool](./create-a-node-pool.md). 
Typical updates include: + +| Area | Examples | +|------|----------| +| Node template | `spec.annotations`, `spec.labels`, `spec.taints` | +| Hub leader | `spec.enableLeaderElection`, `spec.leaderElectionStrategy`, `spec.leaderReplicas`, `spec.leaderNodeLabelSelector`, `spec.poolScopeMetadata` | + +After you update labels, annotations, or taints on the NodePool, allow time for reconciliation and verify on a member node if needed: + +```shell +kubectl get node <node-name> -o yaml +``` + +## Limitations and related topics + +- **Immutable `spec` fields**: `spec.type`, `spec.hostNetwork`, and `spec.interConnectivity` cannot be changed after the NodePool is created. See [Fields that cannot be updated](#fields-that-cannot-be-updated). +- **Resource name**: You cannot rename a NodePool by editing `metadata.name`. To use a different pool name, create a new NodePool and follow your procedure for associating nodes (see [Join a node](../node-management/join-a-node.md)). +- **Node membership**: Which nodes belong to a pool is driven by node labels and controller behavior, not by arbitrary edits to the NodePool alone. The binding label `apps.openyurt.io/nodepool` on a Node is managed by OpenYurt; do not rely on manually changing it to move nodes. See [Delete a node pool](./delete-a-node-pool.md). +- **Removing a pool**: See [Delete a node pool](./delete-a-node-pool.md). + +## Verify after editing + +```shell +kubectl get nodepool <nodepool-name> -o yaml +kubectl get nodepools +``` + +For pools with leader election enabled, inspect `status.leaderEndpoints` and the extra columns from `kubectl get nodepools` as described in [Create a node pool](./create-a-node-pool.md). 
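Propagation of a label from the NodePool spec to member nodes can also be verified from a script. A sketch using the example label from this page (`label_propagated` is a hypothetical helper; `kubectl` is assumed to be configured):

```shell
# Succeed once the node's --show-labels output contains the given label pair.
label_propagated() {
  node="$1"; pair="$2"
  kubectl get node "$node" --show-labels --no-headers | grep -qF "$pair"
}

# usage: label_propagated edge-node-1 "apps.openyurt.io/example=updated"
```

Polling this in a loop gives the controller time to reconcile before declaring the edit effective.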
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/developer-manuals/local-up-openyurt.md b/i18n/zh/docusaurus-plugin-content-docs/current/developer-manuals/local-up-openyurt.md index 5f0571819f5..a8e16842bf2 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/developer-manuals/local-up-openyurt.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/developer-manuals/local-up-openyurt.md @@ -2,16 +2,35 @@ title: 本地启动集群 --- +## 环境准备 + +在运行本地集群相关命令前,请先完成以下环境配置。 + +1. **安装 Go** — 从 [go.dev/dl](https://go.dev/dl/) 下载并安装合适版本的 Go,安装后执行 `go version` 确认。 + +2. **安装 Docker** — 安装 Docker Engine,确保 Docker 守护进程已启动。详见 [Get Docker](https://docs.docker.com/get-docker/)。 + +3. **将 `GOPATH/bin` 加入 `PATH`** — 通过 `go install` 安装的工具(例如 `kind`)会放在 `$(go env GOPATH)/bin` 下,需将该目录加入 `PATH`,否则无法在终端中直接执行。 + +```bash +export PATH="$(go env GOPATH)/bin:$PATH" +``` + +保存后执行 `source ~/.zshrc`(或对应配置文件)或重新打开终端,再通过 `echo "$PATH"` 或运行已安装的工具确认生效。 + ## 使用方法 -OpenYurt提供了一种在本地快速启动集群的方法,通过运行脚本[local-up-openyurt.sh](https://github.com/openyurtio/openyurt/blob/master/hack/make-rules/local-up-openyurt.sh),可以一键式在本地创建OpenYurt集群。该脚本正确完成后,可以直接通过kubectl命令访问集群。在运行前需要安装docker、kubectl、go和kind等依赖软件,以及`make docker-build`在本地准备好OpenYurt各组件镜像。使用方法如下: +OpenYurt提供了一种在本地快速启动集群的方法,通过运行脚本[local-up-openyurt.sh](https://github.com/openyurtio/openyurt/blob/master/hack/make-rules/local-up-openyurt.sh),可以一键式在本地创建OpenYurt集群。该脚本正确完成后,可以直接通过kubectl命令访问集群。除上述环境准备外,还需安装 kubectl 等依赖,以及按需通过 `make docker-build` 在本地准备好 OpenYurt 各组件镜像。使用方法如下: ```bash -# 先构建OpenYurt镜像,再启动OpenYurt集群 -make docker-build-and-up-openyurt +# 先构建OpenYurt镜像 +make docker-build -# 镜像已经构建完成,仅在本地启动OpenYurt集群 +# 本地启动OpenYurt集群 make local-up-openyurt + +# 运行e2e测试 +make e2e-tests ``` > 默认在linux/amd64平台运行,需要在mac环境运行时需要指定平台信息,执行命令为: `make local-up-openyurt TARGET_PLATFORMS=linux/arm64` @@ -20,12 +39,12 @@ make local-up-openyurt 总的来说,`local-up-openyurt.sh`会使用当前openyurt目录下的源文件启动OpenYurt集群。可以通过设置环境变量来控制脚本的行为。 
-脚本会通过`kind`来启动一个kubernetes集群。可以通过设置`KUBERNETESVERSION`来指定集群的kubernetes的版本。如,通过运行`export KUBERNETESVERSION=1.23`可以指定使用1.23版本的kind镜像。还可以通过设置`NODES_NUM`来指定启动集群中包含节点的数量。 +脚本会通过`kind`来启动一个kubernetes集群。可以通过设置`KUBERNETESVERSION`来指定集群的kubernetes的版本。如,通过运行`export KUBERNETESVERSION=1.34`可以指定使用1.34版本的kind镜像。还可以通过设置`NODES_NUM`来指定启动集群中包含节点的数量。 >注意: ->1. `KUBERNETESVERSION`的格式只能是`1.xx`。默认值为`1.22`。 +>1. `KUBERNETESVERSION`的格式只能是`1.xx`。默认值为`1.34`。 >2. `NODES_NUM`的值不能小于2。启动的集群中最后会包含1个control-plane节点,`NODES_NUM-1`个worker节点。默认值为2。 -最后将在kubernetes集群中安装OpenYurt组件,包括`yurthub`,`yurt-controller-manager`,`yurt-tunnel-agent`和`yurt-tunnel-server`。 +最后将在kubernetes集群中安装OpenYurt组件,包括`yurthub`,`yurt-manager`和`raven`。 现在,一个OpenYurt集群就启动完成了。可以直接通过`kubectl`命令来与集群进行交互。`kind`会自动将启动的集群的kubeconfig储存在`KUBECONFIG`所指路径(默认为`${HOME}/.kube/config`)。如果在该目录下已经有了kubeconfig,`kind`会为该kubeconfig增加新的context,并切换current-context指向刚刚创建的集群。可以通过`kubectl config use-context ${PREVIOUS_CONTEXT_NAME}`命令切回原来的集群。context相关的更多信息可以参考该[文档](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters)。另外可以通过设置`KIND_KUBECONFIG`来指定其他的路径。 @@ -43,8 +62,30 @@ NODES_NUM表示创建的集群中包含的节点的数量,最后集群中会 3. **KUBERNETESVERSION** -KUBERNETESVERSION表示创建的集群使用的kubernetes版本,格式为"1.xx"。目前能指定的版本有1.17~1.23。默认值为`1.22`。 +KUBERNETESVERSION表示创建的集群使用的kubernetes版本,格式为"1.xx"。目前能指定的版本有1.32~1.34。默认值为`1.34`。 4. 
**TIMEOUT** -TIMEOUT表示在创建集群期间等待组件(control-plane,yurt-tunnel-server,yurt-tunnel-agent)就绪的最长时间。默认值为`120s`。 \ No newline at end of file +TIMEOUT表示在创建集群期间等待组件(control-plane,yurt-manager)就绪的最长时间。默认值为`120s`。 + +## 提示 + +启动 OpenYurt 集群(例如通过 `local-up-openyurt`)时,建议在宿主机上对以下与 `inotify` 相关的内核参数做调优,以避免在大量文件被监听(例如容器运行时或开发工具)时出现 watch 队列耗尽等问题。 + +推荐配置: + +| 参数 | 推荐值 | +|------|--------| +| `fs.inotify.max_user_watches` | 524288 | +| `fs.inotify.max_user_instances` | 2048 | +| `fs.inotify.max_queued_events` | 524288 | + +重启前临时生效可执行: + +```bash +sudo sysctl -w fs.inotify.max_user_watches=524288 +sudo sysctl -w fs.inotify.max_user_instances=2048 +sudo sysctl -w fs.inotify.max_queued_events=524288 +``` + +若需开机持久化,将上述 `key=value` 写入 `/etc/sysctl.d/` 下的配置文件(例如 `/etc/sysctl.d/99-openyurt-local.conf`),然后执行 `sudo sysctl --system`。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/node-pool-management/delete-a-node-pool.md b/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/node-pool-management/delete-a-node-pool.md index 379369b0a18..d2bad46e7b7 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/node-pool-management/delete-a-node-pool.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/node-pool-management/delete-a-node-pool.md @@ -1,3 +1,64 @@ --- title: 删除节点池 ---- \ No newline at end of file +--- + +**OpenYurt NodePool 资源的最新版本为 `apps.openyurt.io/v1beta2`。** + +**字段与行为说明请以最新的 [API 参考](../../reference/api_reference.md) 为准。** + +## 约束说明 + +当节点池内**仍存在节点**时,**不能**删除该节点池。只有当集群中已没有任何仍属于该池的 Node 资源时,删除才会被允许。 + +### 节点上的节点池标签不可变更 + +节点上的绑定标签 `apps.openyurt.io/nodepool` **由 OpenYurt 管理**。节点与节点池建立关联后,**不要**指望通过删除或修改该标签来迁移节点或“解绑”: + +- 该标签**不支持**手动删除。 +- 该标签**不支持**手动修改(例如改为指向其他节点池)。 + +使用 `kubectl label` 去清空或改写该标签不是受支持的清空节点池方式。若要减少池中节点,唯一受支持的做法是让对应的 **Node 对象从集群中移除**(见下文)。 + +## 确认节点池内是否还有节点 + +通过标签查看仍属于该节点池的节点: + +```shell +kubectl get nodes -l apps.openyurt.io/nodepool=<节点池名称> +``` + +也可查看 
NodePool 的 `status`: + +```shell +kubectl get nodepool <节点池名称> -o yaml +``` + +若 `status.nodes` 非空,说明池中仍有节点,此时不能删除该 NodePool。 + +## 删除节点池前:将节点从集群中移除 + +要删除某个 NodePool,当前仍属于该池的每个节点都必须**不再作为 Node 存在于集群中**。在按规范完成驱逐等工作后,从 API 中删除对应 Node(例如执行 `kubectl delete node <节点名称>`),具体步骤请遵循你们下线节点的运维规范。 + +若使用 `yurtadm`,典型流程可参考 [移除节点](../node-management/remove-a-node.md)。 + +待这些 Node 对象均已删除后,再次确认池中已无成员: + +```shell +kubectl get nodes -l apps.openyurt.io/nodepool=<节点池名称> +``` + +应无节点输出。 + +## 删除 NodePool + +确认节点池内已无节点后,删除资源: + +```shell +kubectl delete nodepool <节点池名称> +``` + +也可使用简写 `np`: + +```shell +kubectl delete np <节点池名称> +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/node-pool-management/edit-a-node-pool.md b/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/node-pool-management/edit-a-node-pool.md index a4dad72a40d..be8e934ca28 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/node-pool-management/edit-a-node-pool.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/user-manuals/node-pool-management/edit-a-node-pool.md @@ -1,3 +1,100 @@ --- title: 编辑节点池 ---- \ No newline at end of file +--- + +**OpenYurt NodePool 资源的最新版本为 `apps.openyurt.io/v1beta2`。** + +**字段说明请参阅最新的 [API 参考](../../reference/api_reference.md)。** + +## 概述 + +编辑 NodePool 会更新其 `spec` 中**允许修改**的字段(以及可选地更新 NodePool 对象 `metadata` 上的标签或注解)。部分 `spec` 字段在创建后不可变更,见下文 [创建后不可修改的字段](#创建后不可修改的字段)。nodepool 控制器会将允许下发的配置**同步到池内节点**:例如在 NodePool `spec` 中定义的标签、注解和污点,会下发到属于该池的节点。 + +本文说明如何使用 `kubectl` 修改 NodePool。 + +## 创建后不可修改的字段 + +以下 `spec` 字段在**创建时确定**,创建后**不支持**再编辑。若尝试修改,API 服务器或准入控制会拒绝该更新: + +- `spec.type`(`Cloud` 或 `Edge`) +- `spec.hostNetwork` +- `spec.interConnectivity` + +若需要不同的取值,请**新建 NodePool** 并在创建时写入目标配置,再按运维流程迁移节点(见 [创建节点池](./create-a-node-pool.md)、[接入节点](../node-management/join-a-node.md))。 + +## 开始之前 + +- 需已安装并运行 [Yurt-Manager](https://openyurt.io/docs/installation/manually-setup/#32-setup-openyurtopenyurt-components)(其中包含 nodepool 控制器)。 +- 
需要具备在 `apps.openyurt.io` API 组下 **更新** `nodepools` 的 RBAC 权限。 + +## 查看当前 NodePool + +```shell +kubectl get nodepool +kubectl get np -o yaml +``` + +支持使用短名称 `np`。 + +## 编辑 NodePool + +### 方式一:`kubectl edit`(交互式) + +在编辑器中打开集群中的实时对象,保存退出后即向 API 提交变更。 + +```shell +kubectl edit nodepool <节点池名称> +``` + +若校验或准入拒绝该变更,编辑器通常会重新打开并显示错误信息。 + +### 方式二:`kubectl apply`(声明式) + +维护 YAML 清单文件,修改所需字段后执行: + +```shell +kubectl apply -f nodepool.yaml +``` + +与操作其他 Kubernetes 对象一样,注意 `apiVersion`、`kind`、`metadata.name` 以及并发修改时的 `resourceVersion`,避免意外覆盖他人变更。 + +### 方式三:`kubectl patch`(局部更新) + +适合脚本或小范围修改。示例:在节点模板中新增或更新一个标签: + +```shell +kubectl patch nodepool <节点池名称> --type merge -p '{"spec":{"labels":{"apps.openyurt.io/example":"updated"}}}' +``` + +请勿通过 patch(或其它更新方式)修改 [创建后不可修改的字段](#创建后不可修改的字段) 中所列项。 + +## 常见可修改项 + +各字段含义见 [创建节点池](./create-a-node-pool.md)。创建后通常仍可调整的包括: + +| 类别 | 示例字段 | +|------|----------| +| 节点模板 | `spec.annotations`、`spec.labels`、`spec.taints` | +| Hub Leader | `spec.enableLeaderElection`、`spec.leaderElectionStrategy`、`spec.leaderReplicas`、`spec.leaderNodeLabelSelector`、`spec.poolScopeMetadata` | + +更新 NodePool 上的标签、注解或污点后,请等待控制器完成同步,必要时在成员节点上核对: + +```shell +kubectl get node <节点名称> -o yaml +``` + +## 限制与相关说明 + +- **不可变的 `spec` 字段**:`spec.type`、`spec.hostNetwork`、`spec.interConnectivity` 在节点池创建后不可修改,详见 [创建后不可修改的字段](#创建后不可修改的字段)。 +- **资源名称**:不能通过修改 `metadata.name` 为 NodePool「改名」。若需使用新名称,应新建 NodePool 并按流程关联节点(见 [接入节点](../node-management/join-a-node.md))。 +- **节点归属**:节点属于哪个池由节点标签及控制器行为决定,不能仅靠任意编辑 NodePool 来「迁移」节点。节点上的绑定标签 `apps.openyurt.io/nodepool` 由 OpenYurt 管理,请勿依赖手工修改来移动节点,详见 [删除节点池](./delete-a-node-pool.md)。 +- **删除节点池**:见 [删除节点池](./delete-a-node-pool.md)。 + +## 编辑后验证 + +```shell +kubectl get nodepool <节点池名称> -o yaml +kubectl get nodepools +``` + +若启用了 Leader 选举,可查看 `status.leaderEndpoints` 以及 `kubectl get nodepools` 的扩展列,说明见 [创建节点池](./create-a-node-pool.md)。