12 changes: 2 additions & 10 deletions docs/core-concepts/raven.md
@@ -63,13 +63,5 @@ Note: The Raven Controller Manager required by the Raven component is refactored
| v0.3.0 | openyurt/raven-agent:v0.3.0 | 2023.1 | feature | Support node IP forwarding |
| v0.4.0 | openyurt/raven-agent:0.4.0 | 2023.11 | feature | Support raven l7 proxy |
| v0.4.1 | openyurt/raven-agent:0.4.1 | 2024.3 | feature | Support raven l3 NAT traverse |
## 5. future plan

- Support SLB as public network exporter for gateway 【[issue #22](https://github.com/openyurtio/raven/issues/22)】
- Support NAT traversal 【[issue #13](https://github.com/openyurtio/raven/issues/13)】
- Support distribute route path decision 【[issue #14](https://github.com/openyurtio/raven/issues/14)】
- route path cost evaluation
- shortest path decision
- keep networking connection alive during paths change

Welcome interested students to join us and contribute code!!
| v0.4.2 | openyurt/raven-agent:0.4.2 | 2024.4 | feature | Support compatibility with iptables-nft |
| v0.4.3 | openyurt/raven-agent:0.4.3 | 2024.12 | feature | Stability improvements and bug fixes to enhance overall network reliability |
12 changes: 1 addition & 11 deletions docs/developer-manuals/how-to-build-and-test.md
@@ -2,16 +2,6 @@
title: How to Build and Test
---

In [OpenYurt repository](https://github.com/openyurtio/openyurt), currently`(v0.7.0, commit: 68a18ee)` 7 components are contained, including:

1. yurthub
2. yurt-controller-manager
3. yurt-tunnel-server
4. yurt-tunnel-agent
5. yurtctl
6. yurtadm
7. yurt-node-servant

This article will give you an introduction of how to build and test the code after development of above components.

## How to build
@@ -34,7 +24,7 @@ This command compiles yurtadm based on the OS and architecture of the local host
make docker-build TARGET_PLATFORMS="${TARGET_PLATFORMS}" REGION="${your_region}"
```

`TARGET_PLATFORMS`: indicates the OS and architecture to which the component will run. Currently, Linux/amd64, Linux/arm, and Linux/arm64 are supported.
`TARGET_PLATFORMS`: indicates the OS and architecture on which the component will run. Currently, linux/amd64, linux/arm, and linux/arm64 are supported.

`REGION`: This parameter affects the GOPROXY used during compilation. Users in China are advised to set `REGION=cn` to ensure proper construction (cn indicates `GOPROXY=https://goproxy.cn`).
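
The region-to-proxy mapping can be pictured as a small shell sketch (a simplified assumption for illustration; the authoritative logic lives in the OpenYurt Makefile, and the non-cn default shown here is assumed):

```shell
# Hypothetical sketch of how REGION selects GOPROXY during the build.
REGION="cn"
if [ "$REGION" = "cn" ]; then
  GOPROXY="https://goproxy.cn"       # mirror for users in China
else
  GOPROXY="https://proxy.golang.org" # assumed default for other regions
fi
echo "building with GOPROXY=$GOPROXY"
```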

51 changes: 46 additions & 5 deletions docs/developer-manuals/local-up-openyurt.md
@@ -2,6 +2,22 @@
title: Local Up OpenYurt
---

## Environment preparation

Before you run the local cluster scripts, set up your environment as follows.

1. **Install Go** — Download and install a supported Go toolchain from [go.dev/dl](https://go.dev/dl/), then confirm with `go version`.

2. **Install Docker** — Install Docker Engine and ensure the Docker daemon is running. See [Get Docker](https://docs.docker.com/get-docker/).

3. **Add `GOPATH/bin` to `PATH`** — Binaries installed with `go install` (for example `kind`) are written to `$(go env GOPATH)/bin`. Add that directory to your `PATH` so your shell can find them.

```bash
export PATH="$(go env GOPATH)/bin:$PATH"
```

Reload the shell configuration or open a new terminal, then verify with `echo "$PATH"` or by running a tool you installed with `go install`.

## How to use

If you don't have an OpenYurt cluster, you can run the shell script [`local-up-openyurt.sh`](https://github.com/openyurtio/openyurt/blob/master/hack/make-rules/local-up-openyurt.sh) to quickly set up an OpenYurt cluster on your local host.
@@ -13,6 +29,9 @@ make docker-build-and-up-openyurt

# start up an OpenYurt cluster based on prepared images
make local-up-openyurt

# run e2e test
make e2e-tests
```
Then you can use `kubectl` to interact with your OpenYurt cluster.

@@ -23,12 +42,12 @@ Then you can use `kubectl` to interact with your OpenYurt cluster.

In summary, `local-up-openyurt.sh` uses the local files under the OpenYurt work path to set up the cluster, and you can control its behavior by setting environment variables.

It will use `kind` to set up the kubernetes cluster. You can set `KUBERNETESVERSION` to specify the kubernetes version to use. For instance, `export KUBERNETESVERSION=1.23` before running the shell will enable you to use kubernetes v1.23. In addition, you can set `NODES_NUM` to specify the number of nodes the cluster will contain.
It will use `kind` to set up the kubernetes cluster. You can set `KUBERNETESVERSION` to specify the kubernetes version to use. For instance, `export KUBERNETESVERSION=1.34` before running the shell will enable you to use kubernetes v1.34. In addition, you can set `NODES_NUM` to specify the number of nodes the cluster will contain.
>Note:
>1. The format of `KUBERNETESVERSION` is `1.xx`, other formats will not be accepted. The default KUBERNETESVERSION is `1.22`.
>1. The format of `KUBERNETESVERSION` is `1.xx`, other formats will not be accepted. The default KUBERNETESVERSION is `1.34`.
>2. `NODES_NUM` should not be less than 2. The cluster will contain one control-plane node and `NODES_NUM-1` worker nodes. The default `NODES_NUM` is 2.
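
Putting the two variables together, a typical setup might look like this (the version and node count are illustrative values, not recommendations):

```shell
# Illustrative environment for local-up-openyurt.sh; adjust to your needs.
export KUBERNETESVERSION=1.34   # must use the 1.xx format
export NODES_NUM=3              # one control-plane node plus NODES_NUM-1 workers
WORKERS=$((NODES_NUM - 1))
echo "cluster will have 1 control-plane node and $WORKERS worker nodes"
```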

At last, Finally, OpenYurt components will be installed in kubernetes cluster, including 'Yurthub', 'Yurt-Controller-Manager', 'Yurt-tunnel-Agent' and 'Yurt-Tunnel-Server'.
Finally, OpenYurt components will be installed in the kubernetes cluster, including YurtHub, Yurt-Manager, and Raven.

By now, you've got the OpenYurt cluster at your local host and you can interact with it using `kubectl`. `kind` will automatically store the kubeconfig at your `KUBECONFIG` path (the default is `${HOME}/.kube/config`). If you already have a `KUBECONFIG` to interact with other clusters, `kind` will add a new context for the OpenYurt cluster into the `KUBECONFIG` and automatically switch to it. You can manually switch back to the previous context with `kubectl config use-context ${PREVIOUS_CONTEXT_NAME}`. For more details, see the [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). You can store the kubeconfig at another path by setting `KIND_KUBECONFIG`.

@@ -46,8 +65,30 @@ NODES_NUM represents the number of nodes to set up in the new-created cluster. T

3. **KUBERNETESVERSION**

KUBERNETESVERSION declares the kubernetes version the cluster will use. The format is "1.XX". Now from 1.17 to 1.23 are supported. The default value is 1.22.
KUBERNETESVERSION declares the kubernetes version the cluster will use. The format is "1.XX". Versions from 1.32 to 1.34 are supported. The default value is 1.34.

4. **TIMEOUT**

TIMEOUT represents the time to wait for the kind control-plane, yurt-tunnel-server and yurt-tunnel-agent to be ready. If they are not ready after the duration, the shell will exit. The default value is 120s.
TIMEOUT represents the time to wait for the kind control-plane and Yurt-Manager to be ready. If they are not ready after the duration, the shell will exit. The default value is 120s.
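
The timeout behavior can be pictured as a simple poll-until-ready loop (a minimal sketch, not the script's actual code; `ready` here is a stand-in check that succeeds on the third poll):

```shell
# Sketch of a poll-until-ready loop with a timeout.
TIMEOUT=10
polls=0
ready() { polls=$((polls + 1)); [ "$polls" -ge 3 ]; }  # fake readiness check
elapsed=0
STATUS=""
until ready; do
  if [ "$elapsed" -ge "$TIMEOUT" ]; then
    STATUS="timeout"   # give up once the budget is spent
    break
  fi
  elapsed=$((elapsed + 1))  # the real script would sleep between polls
done
[ -n "$STATUS" ] || STATUS="ready"
echo "$STATUS"
```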

## Tips

When you start an OpenYurt cluster (for example with `local-up-openyurt`), tune the following `inotify`-related kernel parameters on the host. Raising these limits helps avoid watch-queue exhaustion when many files are monitored (for example by container runtimes or development tools).

Recommended values:

| Parameter | Value |
|-----------|-------|
| `fs.inotify.max_user_watches` | 524288 |
| `fs.inotify.max_user_instances` | 2048 |
| `fs.inotify.max_queued_events` | 524288 |

To apply until reboot:

```bash
sudo sysctl -w fs.inotify.max_user_watches=524288
sudo sysctl -w fs.inotify.max_user_instances=2048
sudo sysctl -w fs.inotify.max_queued_events=524288
```

To persist across reboots, add the same `key=value` lines under `/etc/sysctl.d/` (for example `/etc/sysctl.d/99-openyurt-local.conf`) and run `sudo sysctl --system`.
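
For example, the persisted file might contain (the filename is illustrative):

```
# /etc/sysctl.d/99-openyurt-local.conf
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=2048
fs.inotify.max_queued_events=524288
```
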
9 changes: 1 addition & 8 deletions docs/introduction.md
@@ -63,14 +63,7 @@ Currently, the framework supports five filters within its chain, with the flexib

![resource-access-control](../static/img/docs/introduction/data-filtering-framework.png)

**6. Cloud-edge network bandwidth reduction**

A [performance test](https://openyurt.io/docs/test-report/yurthub-performance-test#traffic) has shown that in a large-scale OpenYurt cluster, the cloud-edge traffic will increase rapidly if pods are deleted and recreated since the kube-proxy components on the edge nodes watch for all endpoints/endpointslices changes. It's worth mentioning that identical endpoints data is transmitted to edge nodes within the same nodepool, which may not be the most efficient approach. This is due to the fact that cloud-edge networking traffic often relies on public networks, which can incur higher costs.
Leveraging the Yurt-Coordinator mentioned above, OpenYurt proposes to introduce a notion of pool-scoped metadata which are unique within a nodepool such as the endpoints/endpointslices data. As described in below Figure, the leader Yurthub will read the pool-scoped data from the cloud kube-apiserver and update the load to yurt-coordinator. As a result, all other YurtHubs will retrieve the pool-scoped data from the yurt-coordinator, eliminating the use of public network bandwidth for retrieving such data from the cloud kube-apiserver.

![bandwidth-reduction](../static/img/docs/introduction/bandwidth-reduction.png)

**7. Cloud-native edge device management**
**6. Cloud-native edge device management**

OpenYurt defines a set of APIs for managing edge devices through cloud Kubernetes controlplane. The APIs abstract the device’s basic properties, main capabilities and the data that should be transmitted between the cloud and the edge. OpenYurt provides integration with mainstream OSS IoT device management solutions, such as EdgeXFoundry using the APIs. As described in below Figure, An instance of YurtIoTDock component and EdgeXFoundry service are deployed in each nodepool. YurtIoTDock component can get the changes of Device CRD from cloud kube-apiserver and convert the desired spec of Device CRD to requests of EdgeXFoundry, then transmit the requests to EdgeXFoundry service in real-time. On the other hand, YurtIoTDock can subscribe to the device status from EdgeXFoundry service, and update the status of Device CRD when status is changed.

114 changes: 1 addition & 113 deletions docs/user-manuals/node-management/join-a-node.md
@@ -57,116 +57,4 @@ yurtadm join <apiserver-address>:6443 --token=<token> --node-type=cloud --discov

`yurtadm join` will automatically handle component installation.
- **Kubernetes Components**: It installs `kubelet`, `kubeadm`, etc. You can pre-place these binaries in your `$PATH` to use a specific version, but `yurtadm` will verify that their major and minor versions match the cluster's Kubernetes version.
- **CNI Binaries**: The join process pulls specially modified CNI binaries (e.g., for Flannel) to suit edge environments. If you have prepared your own CNI binaries, place them under `/opt/cni/bin` and use the `--reuse-cni-bin=true` flag with the `yurtadm join` command.



## 2. Converting an Existing Kubernetes Node

This method is for nodes that are already part of a standard Kubernetes cluster and you want to convert them into OpenYurt edge or cloud nodes. This involves labeling the node and manually setting up YurtHub.

### 2.1 Label and Annotate the Node

First, identify the node as either an `edge` or `cloud` node using a label.

**Labeling an edge node:**
```bash
kubectl label node <node-name> openyurt.io/is-edge-worker=true
```
> For a cloud node, set the label value to `false`.

**Enabling node autonomy (Optional):**
To prevent pods from being evicted when an edge node loses connection to the control plane, add the autonomy-duration annotation. The `node.openyurt.io/autonomy-duration` annotation maps to the `tolerationSeconds` field in the Pod; a value of `0` indicates that Pods will never be evicted. The duration format can be found [here](https://pkg.go.dev/maze.io/x/duration#ParseDuration).
```bash
kubectl annotate node <node-name> node.openyurt.io/autonomy-duration=0
```

**Adding the node to a NodePool (Optional):**
To leverage OpenYurt's unitization capabilities, you can assign the node to a `NodePool`.
```bash
# First, create a NodePool if it doesn't exist
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1beta2
kind: NodePool
metadata:
name: worker
spec:
type: Edge
EOF

# Then, label the node to associate it with the NodePool
kubectl label node <node-name> apps.openyurt.io/desired-nodepool=worker
```

### 2.2 Setup YurtHub

YurtHub is a critical component that acts as a proxy between the `kubelet` and the API server. It is typically deployed as a static pod.

1. **Prepare the YurtHub manifest:**
- Get a bootstrap token and the API server's address.
- Use these values to populate a YurtHub manifest template (e.g., [`config/setup/yurthub.yaml`](https://github.com/openyurtio/openyurt/blob/master/config/setup/yurthub.yaml)).

```bash
# Replace placeholders and copy the manifest to the target node's manifests directory
cat config/setup/yurthub.yaml |
sed 's|__kubernetes_master_address__|<apiserver_address>|;
s|__bootstrap_token__|<token>|' > /tmp/yurthub.yaml
scp /tmp/yurthub.yaml root@<node-ip>:/etc/kubernetes/manifests/
```

### 2.3 Reconfigure Kubelet

Next, reconfigure the `kubelet` on the node to communicate through YurtHub instead of directly with the API server.

1. **Create a new kubeconfig for kubelet:**
This kubeconfig points `kubelet` to the local YurtHub instance (`http://127.0.0.1:10261`).

```bash
# Run these commands on the target node
mkdir -p /var/lib/openyurt
cat << EOF > /var/lib/openyurt/kubelet.conf
apiVersion: v1
clusters:
- cluster:
server: http://127.0.0.1:10261
name: default-cluster
contexts:
- context:
cluster: default-cluster
namespace: default
user: default-auth
name: default-context
current-context: default-context
kind: Config
preferences: {}
EOF
```

2. **Update the kubelet service configuration:**
Modify the `kubelet`'s systemd drop-in file to use the new kubeconfig. The file path may vary depending on your OS (e.g., `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`).
```bash
sed -i "s|KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=\/etc\/kubernetes\/bootstrap-kubelet.conf\ --kubeconfig=\/etc\/kubernetes\/kubelet.conf|KUBELET_KUBECONFIG_ARGS=--kubeconfig=\/var\/lib\/openyurt\/kubelet.conf|g" \
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

3. **Restart the kubelet service:**
```bash
# Run on the target node
systemctl daemon-reload && systemctl restart kubelet
```
After restarting, verify the node returns to a `Ready` state using `kubectl get nodes`.

### 2.4 Restart Pods on the Node

Finally, to ensure all pods on the node communicate through YurtHub, they must be recreated.

**Warning:** This operation will cause a brief service interruption. Confirm the impact on your production environment before proceeding.

```bash
# 1. List all pods running on the converted node
kubectl get pod -A -o wide | grep <node-name>

# 2. Delete all user pods and system pods (except the yurthub pod)
# The Kubernetes controllers will automatically recreate them.
kubectl delete pod <pod-1> <pod-2> -n <namespace>
```
- **CNI Binaries**: The join process pulls specially modified CNI binaries (e.g., for Flannel) to suit edge environments. If you have prepared your own CNI binaries, place them under `/opt/cni/bin` and use the `--reuse-cni-bin=true` flag with the `yurtadm join` command.
63 changes: 62 additions & 1 deletion docs/user-manuals/node-pool-management/delete-a-node-pool.md
@@ -1,3 +1,64 @@
---
title: Delete a node pool
---
---

**The latest version of OpenYurt NodePool resource is `apps.openyurt.io/v1beta2`.**

**Please refer to the latest [API Reference](../../reference/api_reference.md) for details.**
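
For reference, a minimal NodePool at the current API version looks like this (the pool name is illustrative):

```yaml
apiVersion: apps.openyurt.io/v1beta2
kind: NodePool
metadata:
  name: worker
spec:
  type: Edge
```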

## Constraints

You **cannot** delete a NodePool while it still contains one or more nodes. The control plane rejects the delete request until no Node in the cluster is still a member of that pool.

### Node pool label on nodes is immutable

The binding label `apps.openyurt.io/nodepool` on a Node is **managed by OpenYurt**. After a node is associated with a pool, you **must not** rely on deleting or changing this label to move or “unbind” the node:

- The label **cannot** be removed manually.
- The label **cannot** be modified manually (for example, to point the node at another pool).

Trying to clear or edit it with `kubectl label` is not a supported way to empty a NodePool. The only supported way to reduce pool membership is for the corresponding **Node objects to leave the cluster** (see below).

## Check whether the pool has nodes

List nodes that belong to the pool by label:

```shell
kubectl get nodes -l apps.openyurt.io/nodepool=<NodePoolName>
```

You can also inspect the NodePool status:

```shell
kubectl get nodepool <NodePoolName> -o yaml
```

If `status.nodes` is non-empty, the pool still has members and cannot be deleted.
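
An abridged, illustrative status for a pool that still has members might look like this (node names are examples; consult the API Reference for the full schema):

```yaml
status:
  nodes:          # the pool cannot be deleted while this list is non-empty
  - edge-node-1
  - edge-node-2
```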

## Before deleting the NodePool: remove nodes from the cluster

To delete a NodePool, every node that currently belongs to that pool must **no longer exist as a Node** in the cluster. In practice, after you have drained workloads as appropriate, remove each such node from the API (for example `kubectl delete node <NodeName>`), following your operational procedure for decommissioning nodes.

For a typical flow using `yurtadm`, see [Remove a node](../node-management/remove-a-node.md).

After those Node objects are gone, confirm the pool has no members:

```shell
kubectl get nodes -l apps.openyurt.io/nodepool=<NodePoolName>
```

The output should list no nodes.

## Delete the NodePool

When the pool has no nodes, delete the resource:

```shell
kubectl delete nodepool <NodePoolName>
```

The short name `np` is also supported:

```shell
kubectl delete np <NodePoolName>
```