diff --git a/docs/common/dev/_rknn-ultralytics.mdx b/docs/common/dev/_rknn-ultralytics.mdx
index 419694f8f..969da4491 100644
--- a/docs/common/dev/_rknn-ultralytics.mdx
+++ b/docs/common/dev/_rknn-ultralytics.mdx
@@ -1,5 +1,5 @@
:::tip
-本文档旨在演示如何在 rk3588/356X 上推理 YOLOv11 目标检测模型,所需环境配置请参考[ RKNN 安装](./rknn_install)
+本文档旨在演示如何在 rk3588/356X 上推理 YOLOv11 目标检测模型,所需环境配置请参考[ RKNN 安装](./rknn-install)
:::
目前 [Ultralytics](https://docs.ultralytics.com/integrations/rockchip-rknn/) 官方已经支持 rknn 平台,RK3588/356X 产品用户可以直接使用 `ultralytics` 库进行 yolov11 的模型转换和模型部署。
diff --git a/i18n/en/docusaurus-plugin-content-docs/current/common/ai/_rknn_custom_yolo.mdx b/i18n/en/docusaurus-plugin-content-docs/current/common/ai/_rknn_custom_yolo.mdx
index 3895d07fb..7dff09f16 100644
--- a/i18n/en/docusaurus-plugin-content-docs/current/common/ai/_rknn_custom_yolo.mdx
+++ b/i18n/en/docusaurus-plugin-content-docs/current/common/ai/_rknn_custom_yolo.mdx
@@ -135,7 +135,7 @@ In this example, FP16 RKNN inference time is **64.3 ms**.
If your model is not an Ultralytics export, RKNN Model Zoo provides `python/convert.py` scripts under the corresponding YOLO example directories.
Export your model to ONNX first and then convert to RKNN with `quant_dtype=fp`.
-See: [Deploy YOLOv5 on the Device](rknn_toolkit_lite2_yolov5).
+See: [Deploy YOLOv5 on the Device](rknn-toolkit-lite2-yolov5).
## Approach B: INT8 RKNN (best performance)
diff --git a/i18n/en/docusaurus-plugin-content-docs/current/common/ai/rockchip/_rknn_model_zoo.mdx b/i18n/en/docusaurus-plugin-content-docs/current/common/ai/rockchip/_rknn_model_zoo.mdx
index 3e64867b1..2507b917d 100644
--- a/i18n/en/docusaurus-plugin-content-docs/current/common/ai/rockchip/_rknn_model_zoo.mdx
+++ b/i18n/en/docusaurus-plugin-content-docs/current/common/ai/rockchip/_rknn_model_zoo.mdx
@@ -1,8 +1,8 @@
-RKNN Model Zoo 基于 RKNPU SDK 工具链开发,提供了目前主流算法的部署示例。包含导出 RKNN 模型, 使用 Python API,C API 推理 RKNN 模型的流程。
+RKNN Model Zoo is built on top of the RKNPU SDK toolchain and provides deployment examples for many mainstream algorithms. It covers the full workflow of exporting RKNN models and performing inference using both the Python API and the C API.
-RKNN Model Zoo 依赖 RKNN-Toolkit2 进行模型转换, 编译 C API demo 时需要用到对应的编译工具链。
+RKNN Model Zoo depends on **RKNN-Toolkit2** for model conversion. When compiling C API demos, you also need the corresponding cross-compilation toolchain.
-## 下载仓库
+## Clone the Repository
@@ -13,7 +13,7 @@ git clone -b v2.3.2 https://github.com/airockchip/rknn_model_zoo.git
-## 仓库目录结构
+## Repository Structure
```bash
./
@@ -24,7 +24,7 @@ git clone -b v2.3.2 https://github.com/airockchip/rknn_model_zoo.git
├── build-linux.sh
├── datasets
├── docs
-├── examples # 示例目录
+├── examples # examples directory
│ ├── clip
│ ├── deeplabv3
│ ├── lite_transformer
@@ -80,19 +80,19 @@ git clone -b v2.3.2 https://github.com/airockchip/rknn_model_zoo.git
└── image_utils.h
```
-## 基本使用流程
+## Basic Usage Flow
### C API
-使用根目录下的 build-linux.sh 脚本进行编译。
+Use the `build-linux.sh` script in the repository root to compile the C demos.
-想要在 x64 主机上编译出能在 arm64 设备运行的可执行程序,你需要下载交叉编译工具链。
+If you want to build executables on an x86_64 host that run on an arm64 target device, you must first download a cross-compilation toolchain.
-点击下载:[交叉编译工具链](https://developer.arm.com/-/media/files/downloads/gnu/11.2-2022.02/binrel/gcc-arm-11.2-2022.02-x86_64-aarch64-none-linux-gnu.tar.xz?rev=33c6e30e5ac64e6dba8f0431f2c35f1b&revision=33c6e30e-5ac6-4e6d-ba8f-0431f2c35f1b&hash=632C6C0BD43C3E4B59CA8A09A7055D30)。
+Download link: [Cross-compilation toolchain](https://developer.arm.com/-/media/files/downloads/gnu/11.2-2022.02/binrel/gcc-arm-11.2-2022.02-x86_64-aarch64-none-linux-gnu.tar.xz?rev=33c6e30e5ac64e6dba8f0431f2c35f1b&revision=33c6e30e-5ac6-4e6d-ba8f-0431f2c35f1b&hash=632C6C0BD43C3E4B59CA8A09A7055D30).
-下载完成之后解压即可。
+After the download is complete, extract the archive.
-使用脚本前需要导出编译器路径到环境变量,让脚本能找到下载的交叉编译器。
+Before using the build script, you need to export the compiler path to an environment variable so that the script can find the downloaded cross toolchain.
@@ -102,7 +102,7 @@ export GCC_COMPILER=/path/to/your/gcc/bin/aarch64-linux-gnu
-脚本基本使用格式:
+Basic usage of the script:
@@ -113,9 +113,9 @@ export GCC_COMPILER=/path/to/your/gcc/bin/aarch64-linux-gnu
-d : demo name
-b : build_type(Debug/Release)
-m : enable address sanitizer, build_type need set to Debug
-Note: 'rk356x' represents rk3562/rk3566/rk3568.
+Note: `rk356x` represents rk3562/rk3566/rk3568.
-# 以编译 RK3566 的 yolov5 demo 为例:
+# Example: build the YOLOv5 demo for RK3566:
./build-linux.sh -t rk356x -a aarch64 -d yolov5
```
@@ -123,9 +123,9 @@ Note: 'rk356x' represents rk3562/rk3566/rk3568.
### Python API
-Activate the virtual environment,将模型转换为 rknn 格式之后进入目标示例目录直接运行对应的 python 脚本即可。
+Activate the virtual environment. After the model has been converted to RKNN format, enter the target example directory and run the corresponding Python script directly.
-以 RK3566 的 yolov5 demo 为例:
+For example, to run the YOLOv5 demo on an RK3566 target:
diff --git a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-install.mdx b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-install.mdx
index 3d2e1f67a..e433d6da4 100644
--- a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-install.mdx
+++ b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-install.mdx
@@ -1,208 +1,213 @@
-:::tip
-This document aims to demonstrate how to install the RKNN SDK. For more information, please refer to the [RKNN Toolkit2 source repository](https://github.com/rockchip-linux/rknn-toolkit2) in the doc directory.
+:::info
+This document demonstrates how to install the RKNN SDK. For more detailed information, please refer to the `doc` directory of the [RKNN Toolkit2 repository](https://github.com/rockchip-linux/rknn-toolkit2).
:::
-## Introduction to RKNN
+## RKNN Overview
-The Rockchip RK3566/RK3568 series, RK3588 series, RK3562 series, and RV1103/RV1106 series chips are equipped with a neural network processor (NPU). RKNN helps users quickly deploy AI models onto Rockchip chips using NPU hardware acceleration for model inference. To use RKNPU, users first need to convert their trained models to RKNN format using the RKNN-Toolkit2 tool on an x86 computer, and then perform inference on the development board using the RKNN C API or Python API.
+Rockchip RK3566/RK3568 series, RK3588 series, RK3562 series, and RV1103/RV1106 series chips are equipped with an NPU (Neural Processing Unit).
+Using RKNN helps you quickly deploy AI models to Rockchip chips and run inference accelerated by the NPU hardware.
+To use RKNPU, you first need to use the RKNN-Toolkit2 tools on an x86 PC to convert trained models into RKNN-format models, and then run inference on the target board using the RKNN C API or Python API.
-Required tools:
+The toolchain consists of:
-- **RKNN-Toolkit2** is a software development kit for users to perform model conversion, inference, and performance evaluation on PC and Rockchip NPU platforms.
-- **RKNN-Toolkit-Lite2** provides a Python programming interface for the Rockchip NPU platform, helping users deploy RKNN models and accelerate AI application implementation.
-- **RKNN Runtime** offers a C/C++ programming interface for the Rockchip NPU platform, assisting users in deploying RKNN models and accelerating AI application implementation.
-- **RKNPU Kernel Driver** is responsible for interacting with the NPU hardware.
+- **RKNN-Toolkit2**: a software development toolkit for performing model conversion, inference, and performance evaluation on both PC and Rockchip NPU platforms.
+- **RKNN-Toolkit-Lite2**: provides Python APIs on Rockchip NPU platforms to deploy RKNN models and accelerate AI applications.
+- **RKNN Runtime**: provides C/C++ APIs on Rockchip NPU platforms to deploy RKNN models and accelerate AI applications.
+- **RKNPU kernel driver**: responsible for interacting with the NPU hardware.
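+
+On the PC side, the conversion step boils down to a few rknn-toolkit2 API calls. The sketch below is illustrative rather than taken from this guide's scripts; the model path, platform, and calibration list are assumptions:
+
+```python
+from rknn.api import RKNN
+
+rknn = RKNN()
+# Preprocessing is baked into the RKNN model; target_platform selects the NPU
+rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
+            target_platform='rk3588')
+rknn.load_onnx(model='model.onnx')             # import the trained ONNX model
+# INT8 quantization needs a calibration image list;
+# pass do_quantization=False for an FP16 model instead
+rknn.build(do_quantization=True, dataset='dataset.txt')
+rknn.export_rknn('model.rknn')                 # artifact to deploy on the board
+rknn.release()
+```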
The overall framework is as follows:

-## Installing the RKNN Environment
+## Download the SDK
-### Download RKNN-Toolkit2 Repository on PC
+Clone the RKNN-Toolkit2 repository.
-- Download the RKNN repository.
+:::tip
+It is recommended to create a dedicated directory to store the related SDKs.
+:::
- It is recommended to create a new directory to store the RKNN repository. For example, create a folder named "Projects" and store the RKNN-Toolkit2 and RKNN Model Zoo repositories in that directory.
+
-
+```bash
+mkdir RKSDK && cd RKSDK
+git clone -b v2.3.2 https://github.com/airockchip/rknn-toolkit2.git
+```
- ```bash
- # Create Projects folder
- mkdir Projects && cd Projects
- # Download RKNN-Toolkit2 repository
- git clone -b v2.3.0 https://github.com/airockchip/rknn-toolkit2.git
- # Download RKNN Model Zoo repository
- git clone -b v2.3.0 https://github.com/airockchip/rknn_model_zoo.git
- ```
+
-
+## Version Information
-- (Optional) Install [Anaconda](https://www.anaconda.com/)
+:::info
+RKNN-Toolkit2 uses a Python environment for model conversion, quantization, and other operations. On the target board, the Python environment is also used to interact with the NPU driver via Python APIs.
+Different CPU architectures and OS versions require different Python versions and environments. Please refer to the following tables to choose the appropriate setup.
+We recommend managing Python environments with a tool such as [Miniforge](https://conda-forge.org/miniforge/).
+:::
- If Python 3.8 (recommended version) is not installed on your system, or if you have multiple versions of Python environments, it is recommended to use [Anaconda](https://www.anaconda.com/) to create a new Python 3.8 environment.
+The version mapping is as follows:
- - Install Anaconda
+- x86_64 (PC) runtime requirements:
- Execute the following command in the terminal to check if Anaconda is installed. If it is installed, you can skip this step.
+| OS Version | Ubuntu18.04 (x64) | Ubuntu20.04 (x64) | Ubuntu22.04 (x64) | Ubuntu24.04 (x64) |
+| ---------- | ----------------- | ----------------- | ----------------- | ----------------- |
+| Python | 3.6 / 3.7 | 3.8 / 3.9 | 3.10 / 3.11 | 3.12 |
-
+- ARM64 runtime requirements:
- ```bash
- $ conda --version
- conda 24.9.2
- ```
+| OS Version | Debian10 (arm64) | Debian11 (arm64) | Debian12 (arm64) |
+| ---------- | ---------------- | ---------------- | ------------------ |
+| Python | 3.6 / 3.7 | 3.8 / 3.9 | 3.10 / 3.11 / 3.12 |
-
+:::info
+Based on the version information above, configure the virtual environment on your host by following the steps below. If you plan to use the Python API on the target board, you must configure a matching virtual environment on the board as well.
+:::
- If "conda: command not found" appears, it indicates that Anaconda is not installed. Please refer to the [Anaconda](https://www.anaconda.com/) official website for installation instructions.
+## Install Miniforge
- - Create a conda environment
+
-
+```bash
+wget https://github.com/conda-forge/miniforge/releases/download/25.11.0-0/Miniforge3-25.11.0-0-Linux-x86_64.sh
+chmod +x Miniforge3-25.11.0-0-Linux-x86_64.sh
+bash Miniforge3-25.11.0-0-Linux-x86_64.sh
+```
- ```bash
- conda create -n rknn python=3.8.2
- ```
+
-
+## Create a Virtual Environment
- - Activate the rknn conda environment
+
-
+```bash
+conda create -n rknn python=3.8
+```
- ```bash
- conda activate rknn
- ```
+
-
+## Activate the Virtual Environment
- - Exit the environment
+
-
+```bash
+conda activate rknn
+```
- ```bash
- conda deactivate
- ```
+
-
+## Install rknn-toolkit2
-### Install RKNN-Toolkit2 on PC
+
-- After activating the conda rknn environment, enter the rknn-toolkit2 directory and install dependencies according to your architecture platform and Python version by selecting the corresponding `requirements_cpxx.txt`. Install RKNN-Toolkit2 via wheel package. Here is an example command for a 64-bit x86 architecture Python 3.8 environment:
+```bash
+cd rknn-toolkit2/rknn-toolkit2/packages/x86_64
+pip install rknn_toolkit2-2.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+```
+
+
-
+For arm64 architecture (i.e., on the target board):
- ```bash
- # Enter rknn-toolkit2 directory
- cd rknn-toolkit2/rknn-toolkit2/packages/x86_64/
- # Please select the appropriate requirements file based on your Python version; here it is for python3.8
- pip3 install -r requirements_cp38-2.3.0.txt
- # Please select the appropriate wheel installation package based on your Python version and processor architecture:
- pip3 install ./rknn_toolkit2-2.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- ```
+
-
+```bash
+cd rknn-toolkit2/rknn-toolkit2/packages/arm64
+pip install rknn_toolkit2-2.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
+```
- :::tip
- If you are using an ARM architecture platform, please install the dependencies under arm64.
- :::
+
-- Verify successful installation
+## Verify the Installation
- Execute the following command. If there are no errors, it means that the RKNN-Toolkit2 environment has been installed successfully.
+Run the following commands. If no error is reported, the RKNN-Toolkit2 environment has been installed successfully.
-
+
- ```bash
- $ python3
- >>> from rknn.api import RKNN
- ```
+```bash
+$ python3
+>>> from rknn.api import RKNN
+```
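+
+Optionally, a fuller check in script form (a minimal sketch; constructing and releasing an RKNN object also exercises the native library, and `verbose=True` enables detailed toolkit logging):
+
+```python
+from rknn.api import RKNN
+
+rknn = RKNN(verbose=True)  # verbose logging prints toolkit details on use
+rknn.release()
+```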
-
+
-### Install RKNN Toolkit Lite2 and its dependencies on the board
+## Install rknn-toolkit-lite2
:::tip
-For users of RK356X products, you need to enable the NPU in the terminal using **rsetup** before using the NPU: `sudo rsetup -> Overlays -> Manage overlays -> Enable NPU`, then restart the system.
-
-If there is no `Enable NPU` option in the overlays options, please update the system via: `sudo rsetup -> system -> System Update`, restart, and execute the above steps to enable the NPU.
-:::
-:::info
-The official Radxa image has RKNPU2 and its required dependencies installed by default. You only need to install `python3-rknnlite2`. If it does not run, you can try commenting out the command.
+Compared with `rknn_toolkit2`, `rknn-toolkit-lite2` removes the model-conversion functionality and only provides the Python APIs for NPU inference.
+It has a much smaller footprint and is suitable for users who only need to run inference on the target board. Choose the appropriate `rknn-toolkit-lite2` package according to the Python version on your board.
:::
-
+
```bash
-sudo apt update
-sudo apt install python3-rknnlite2
-# sudo apt install rknpu2-rk3588 # If the SOC is RK3588 series
-# sudo apt install rknpu2-rk356x # If the SOC is RK356X series
+cd rknn-toolkit2/rknn-toolkit-lite2/packages
+pip3 install rknn_toolkit_lite2-2.3.2-cp3X-cp3X-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
```
-- If you are using the CLI version, you can access the rknn toolkit lite2 [deb package download link](https://github.com/radxa-pkg/rknn2/releases)
+Run the following commands. If no error is reported, the `rknn-toolkit-lite2` environment has been installed successfully.
-- Check the rknpu2 driver version. The output below indicates that the rknpu2 driver version is 0.9.6.
- :::tip
- rk356X product system, rknpu2 driver version is 0.8.8
- :::
+
-
+```bash
+$ python3
+>>> from rknnlite.api import RKNNLite as RKNN
+```
- ```bash
- sudo dmesg | grep "Initialized rknpu"
- [ 15.522298] [drm] Initialized rknpu 0.9.6 20240322 for fdab0000.npu on minor 1
- ```
+
+
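+Once a converted model has been copied to the board, inference with RKNN-Toolkit-Lite2 follows the pattern below (a minimal sketch; the model path and the 640x640 NHWC input shape are illustrative assumptions for a YOLO-style model):
+
+```python
+import numpy as np
+from rknnlite.api import RKNNLite
+
+rknn_lite = RKNNLite()
+rknn_lite.load_rknn('./yolov5s_rk3588.rknn')      # hypothetical model path
+rknn_lite.init_runtime()                          # attaches to the NPU driver
+
+img = np.zeros((1, 640, 640, 3), dtype=np.uint8)  # dummy NHWC input image
+outputs = rknn_lite.inference(inputs=[img])
+print(len(outputs), 'output tensor(s)')
+
+rknn_lite.release()
+```
+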
+## Toolchain for Cross Compilation
-
+To compile programs that run on the target board, you need a cross-compilation toolchain.
-### (Optional) Install rknn_toolkit-lite2 Python API in a virtual environment on the board
+Download: [Cross-compilation toolchain](https://developer.arm.com/-/media/files/downloads/gnu/11.2-2022.02/binrel/gcc-arm-11.2-2022.02-x86_64-aarch64-none-linux-gnu.tar.xz?rev=33c6e30e5ac64e6dba8f0431f2c35f1b&revision=33c6e30e-5ac6-4e6d-ba8f-0431f2c35f1b&hash=632C6C0BD43C3E4B59CA8A09A7055D30).
-If you prefer to use the Python venv virtual environment on the board system, you need to install the rknn_toolkit-lite2 wheel separately.
+After the download is complete, extract the archive.
-For instructions on using the virtual environment, please refer to [Python Virtual Environment Usage](./venv_usage).
+Before compiling, export the compiler path to an environment variable so that scripts can locate the downloaded cross-compiler.
-
+
```bash
-cd rknn-toolkit2/rknn-toolkit-lite2/packages/
+export GCC_COMPILER=/path/to/your/gcc/bin/aarch64-linux-gnu
```
-Copy the corresponding `rknn_toolkit_lite2-2.3.0-cp3X-cp3X-manylinux_2_17_aarch64.manylinux2014_aarch64.whl` to the board based on the **Python version** of the board system.
-
-After entering the virtual environment, use pip3 to install:
+## NPU Driver Configuration on the Board
-
-
-```bash
-pip3 install ./rknn_toolkit_lite2-2.3.0-cp3X-cp3X-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
-```
+:::tip
+For RK356X products, you must enable the NPU with **rsetup** before using it:
+`sudo rsetup` -> `Overlays` -> `Manage overlays` -> `Enable NPU`, then reboot the system.
-
+If there is no `Enable NPU` option in `Overlays`, please run: `sudo rsetup` -> `System` -> `System Update` to upgrade the system, then reboot and repeat the steps above to enable the NPU.
+:::
-Execute the following command. If there are no errors, it means that the rknn_toolkit-lite2 environment has been installed successfully.
+:::info
+Radxa official images come with RKNPU2 and its required dependencies preinstalled. If you encounter issues running NPU workloads, you can try uncommenting and executing the following commands.
+:::
-
+
```bash
-$ python3
->>> from rknnlite.api import RKNNLite as RKNN
+# sudo apt update
+# sudo apt install rknpu2-rk3588 # if the SoC is an RK3588-series chip
+# sudo apt install rknpu2-rk356x # if the SoC is an RK356X-series chip
```
-### (Optional) Install RKNN Model Zoo on the board
+Check the `rknpu2` driver version; in the example below it is 0.9.6. An RKNPU driver version >= 0.9.2 is recommended.
+
+:::info
+On RK356X product systems, the `rknpu2` driver version may be 0.8.8.
+:::
-
+
```bash
-# Download RKNN Model Zoo repository
-git clone -b v2.3.0 https://github.com/airockchip/rknn_model_zoo.git
+sudo dmesg | grep "Initialized rknpu"
+[ 15.522298] [drm] Initialized rknpu 0.9.6 20240322 for fdab0000.npu on minor 1
```
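+
+If the boot log has already rotated out of `dmesg`, the driver version can usually also be read from debugfs (a hedged alternative: the `/sys/kernel/debug/rknpu/version` node is exposed by the RKNPU driver and typically requires root):
+
+```python
+# Run with sudo on the board; prints something like "RKNPU driver: v0.9.6"
+with open('/sys/kernel/debug/rknpu/version') as f:
+    print(f.read().strip())
+```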
diff --git a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov5.mdx b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov5.mdx
index 93f5c0eca..d968b8515 100644
--- a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov5.mdx
+++ b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov5.mdx
@@ -1,21 +1,21 @@
:::tip
-This document demonstrates how to perform on-device inference for YOLOv5 object detection models on Rockchip RK3588/3566 series chips. For environment setup, refer to [RKNN Installation](./rknn_install).
+This document demonstrates how to run on-device inference of the YOLOv5 object detection model on Rockchip RK3588/3566 series chips. For the required environment setup, please refer to [RKNN Installation](./rknn-install).
:::
-This example uses a pre-trained ONNX model from the [rknn_model_zoo](https://github.com/airockchip/rknn_model_zoo) as a case study, showcasing the complete workflow from model conversion to on-device inference.
+This example uses a pre-trained ONNX model from the [rknn_model_zoo](https://github.com/airockchip/rknn_model_zoo) as a case study, showing the complete process from model conversion to on-device inference.
-Deploying YOLOv5 with RKNN involves two steps:
+Deploying YOLOv5 with RKNN involves two main steps:
-- Use `rknn-toolkit2` on a PC to convert models from various frameworks to RKNN format.
-- Use the Python API of `rknn-toolkit2-lite` on the device for model inference.
+- On the PC, use **rknn-toolkit2** to convert models from different frameworks into RKNN format.
+- On the device, use the Python API of **rknn-toolkit-lite2** to run inference.
### Model Conversion on PC
:::tip
-Radxa provides a pre-converted `yolov5s_rk35XX.rknn` model. Users can directly skip to [On-Device YOLOv5 Inference](#on-device-yolov5-inference) and skip the PC model conversion section.
+Radxa provides a pre-converted `yolov5s_rk35XX.rknn` model. Users can skip the PC-side model conversion section and directly refer to [YOLOv5 Inference on Device](#yolov5-inference-on-device).
:::
-- Activate the `rknn` Conda environment if you are using Conda:
+- If you are using Conda, first activate the `rknn` Conda environment:
@@ -41,22 +41,22 @@ Radxa provides a pre-converted `yolov5s_rk35XX.rknn` model. Users can directly s
```bash
cd rknn_model_zoo/examples/yolov5/model
+ # Download the pre-trained yolov5s_relu.onnx model
bash download_model.sh
```
- If network issues occur, download the model from [this page](https://github.com/airockchip/rknn_model_zoo?tab=readme-ov-file#model-support) and place it in the appropriate directory.
+ If you encounter network issues, you can visit [this page](https://github.com/airockchip/rknn_model_zoo?tab=readme-ov-file#model-support) to manually download the model and place it in the corresponding folder.
-- Convert the model to RKNN format using `rknn-toolkit2`:
+- Convert the ONNX model to `yolov5s_relu.rknn` using **rknn-toolkit2**:
```bash
cd rknn_model_zoo/examples/yolov5/python
python3 convert.py
- # Example:
- python3 convert.py ../model/yolov5s_relu.onnx rk3588 i8 ../model/yolov5s_relu_rk3588.rknn
+ # python3 convert.py ../model/yolov5s_relu.onnx rk3588 i8 ../model/yolov5s_relu_rk3588.rknn
```
@@ -64,21 +64,27 @@ Radxa provides a pre-converted `yolov5s_rk35XX.rknn` model. Users can directly s
Parameter explanation:
- `<onnx_model>`: Path to the ONNX model.
- - `<TARGET_PLATFORM>`: Target NPU platform. Options include `rk3562, rk3566, rk3568, rk3576, rk3588, rk1808, rv1109, rv1126`.
- - `<dtype>`: Choose `i8` for int8 quantization or `fp` for fp16 quantization. Default is `i8`.
- - `<output_rknn_path>`: Path to save the RKNN model. Defaults to the same directory as the ONNX model.
+ - `<TARGET_PLATFORM>`: Name of the NPU platform. Options: `rk3562, rk3566, rk3568, rk3576, rk3588, rk1808, rv1109, rv1126`.
+ - `<dtype>`: Choose `i8` or `fp`. `i8` is for INT8 quantization; `fp` is for FP16 quantization. The default is `i8`.
+ - `<output_rknn_path>`: Path to save the RKNN model. By default it is saved in the same directory as the ONNX model.
-- Transfer the RKNN model to the target device.
+ :::tip
+ For RK358X users, set `TARGET_PLATFORM` to `rk3588`.
+ :::
-### On-Device YOLOv5 Inference
+- Copy the generated RKNN model to the device.
+
+### YOLOv5 Inference on Device
:::tip
-For users of RK356X products, you need to enable the NPU in the terminal using **rsetup** before using the NPU: `sudo rsetup -> Overlays -> Manage overlays -> Enable NPU`, then restart the system.
+For RK356X products, you must enable the NPU using **rsetup** before running NPU workloads:
+
+`sudo rsetup -> Overlays -> Manage overlays -> Enable NPU`, then reboot the system.
-If there is no `Enable NPU` option in the overlays options, please update the system via: `sudo rsetup -> system -> System Update`, restart, and execute the above steps to enable the NPU.
+If there is no `Enable NPU` option in `Overlays`, please run: `sudo rsetup -> System -> System Update` to upgrade the system, reboot, and then repeat the above steps to enable the NPU.
:::
-- (Optional) Download pre-converted YOLOv5s RKNN models provided by Radxa:
+- (Optional) Download the YOLOv5s RKNN models prepared by Radxa:
| Platform | Download Link |
| -------- | ------------------------------------------------------------------------------------------------------------------------ |
@@ -86,7 +92,9 @@ If there is no `Enable NPU` option in the overlays options, please update the sy
| rk3568 | [yolov5s_rk3568.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3568.rknn) |
| rk3588 | [yolov5s_rk3588.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3588.rknn) |
-- Modify `rknn_model_zoo/py_utils/rknn_executor.py` (backup the original code):
+- Modify `rknn_model_zoo/py_utils/rknn_executor.py` (**remember to back up the original code**):
+
+  Set up the RKNN Model Zoo repository on the device first, as described in [RKNN Model Zoo](./rknn-model-zoo), then apply the change below.
@@ -101,12 +109,17 @@ If there is no `Enable NPU` option in the overlays options, please update the sy
self.rknn = rknn
def run(self, inputs):
- if not self.rknn:
- print("ERROR: RKNN has been released")
+ if self.rknn is None:
+ print("ERROR: rknn has been released")
return []
- inputs = [inputs] if not isinstance(inputs, (list, tuple)) else inputs
+ if isinstance(inputs, list) or isinstance(inputs, tuple):
+ pass
+ else:
+ inputs = [inputs]
+
result = self.rknn.inference(inputs=inputs)
+
return result
def release(self):
@@ -116,7 +129,7 @@ If there is no `Enable NPU` option in the overlays options, please update the sy
-- Update line 262 in `rknn_model_zoo/examples/yolov5/python/yolov5.py` (backup the original code):
+- Modify line 262 in `rknn_model_zoo/examples/yolov5/python/yolov5.py` (**remember to back up the original code**):
@@ -128,19 +141,16 @@ If there is no `Enable NPU` option in the overlays options, please update the sy
- Enter the virtual environment:
- Refer to [Python Virtual Environment Usage](./venv_usage).
- Install the `rknn_toolkit-lite2` Python API as described in [Install rknn_toolkit-lite2 Python API in a virtual environment on the board](rknn_install#optional-install-rknn_toolkit-lite2-python-api-in-a-virtual-environment-on-the-board).
+ For virtual environment usage, refer to [Python Virtual Environment Usage](../venv-usage).
-- Install dependencies:
+ To install the `rknn-toolkit-lite2` Python API, see [Install rknn-toolkit-lite2](./rknn-install#install-rknn-toolkit-lite2).
-
+- Install dependencies:
```bash
pip3 install opencv-python-headless
```
-
-
- Run the YOLOv5 example:
@@ -152,7 +162,7 @@ If there is no `Enable NPU` option in the overlays options, please update the sy
- If using a custom-converted model, copy it from the PC to the device and specify the path with `--model_path`.
+ If you are using a model converted on the PC, copy it from the PC to the device and specify the model path with the `--model_path` parameter.
@@ -182,9 +192,9 @@ If there is no `Enable NPU` option in the overlays options, please update the sy
Parameter explanation:
- `--model_path`: Path to the RKNN model.
- - `--img_folder`: Folder of images for inference. Default: `../model`.
- - `--img_save`: Save inference results to `./result`. Default: `False`.
+ - `--img_folder`: Folder containing images for inference, default is `../model`.
+ - `--img_save`: Whether to save the inference result images to `./result`. Default is `False`.
-- All inference results are saved in `./result`.
+- All inference results are stored in the `./result` directory.
diff --git a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov8.mdx b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov8.mdx
index ea2e1a7e2..048290dba 100644
--- a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov8.mdx
+++ b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov8.mdx
@@ -1,19 +1,19 @@
:::tip
-This document demonstrates how to run on-board inference of the YOLOv8 object detection model on the RK3588. For the required environment setup, please refer to [RKNN Installation](./rknn_install).
+This document demonstrates how to run YOLOv8 object detection inference on the RK3588 board. For the required environment setup, please refer to [RKNN Installation](./rknn-install).
:::
-This example uses a pretrained ONNX model from the [rknn_model_zoo](https://github.com/airockchip/rknn_model_zoo) as an example to illustrate the complete process of converting the model and performing inference on the board. The target platform for this example is RK3588.
+This example uses a pre-trained ONNX model from the [rknn_model_zoo](https://github.com/airockchip/rknn_model_zoo) to illustrate the complete process from model conversion on the PC to on-device inference. The target platform in this example is RK3588.
-Using RKNN to deploy YOLOv8 involves two steps:
+Deploying YOLOv8 with RKNN involves two main steps:
-- On a PC, use **rknn-toolkit2** to convert models from different frameworks into RKNN format.
-- On the board, use **rknn-toolkit2-lite**'s Python API for model inference.
+- On the PC, use **rknn-toolkit2** to convert models from different frameworks into RKNN format.
+- On the device, use the Python API of **rknn-toolkit-lite2** to run inference.
### Model Conversion on PC
-**Radxa has provided a pre-converted `yolov8.rknn` model. Users can skip this section and refer to [YOLOv8 Inference on Board](#yolov8-inference-on-board).**
+**Radxa provides a pre-converted `yolov8.rknn` model. Users can skip the PC-side model conversion section and directly refer to [YOLOv8 Inference on Device](#yolov8-inference-on-device).**
-- If using Conda, activate the RKNN environment first:
+- If you are using Conda, first activate the `rknn` Conda environment:
@@ -23,21 +23,21 @@ Using RKNN to deploy YOLOv8 involves two steps:
-- Download the YOLOv8 ONNX model:
+- Download the `yolov8n.onnx` model:
```bash
cd rknn_model_zoo/examples/yolov8/model
- # Download the pretrained yolov8n.onnx model
+ # Download the pre-trained yolov8n.onnx model
bash download_model.sh
```
- If you encounter network issues, you can download the corresponding model manually from [this page](https://github.com/airockchip/rknn_model_zoo?tab=readme-ov-file#model-support) and place it in the appropriate folder.
+ If you encounter network issues, you can visit [this page](https://github.com/airockchip/rknn_model_zoo?tab=readme-ov-file#model-support) to manually download the model and place it in the corresponding folder.
-- Convert the ONNX model to the RKNN format using **rknn-toolkit2**:
+- Convert the ONNX model to `yolov8n.rknn` using **rknn-toolkit2**:
@@ -48,32 +48,36 @@ Using RKNN to deploy YOLOv8 involves two steps:
- **Parameter Explanation**:
+ Parameter explanation:
- - `<onnx_model>`: Specifies the path to the ONNX model.
- - `<TARGET_PLATFORM>`: Specifies the NPU platform name. Options include `rk3562, rk3566, rk3568, rk3576, rk3588, rk1808, rv1109, rv1126`.
- - `<dtype> (optional)`: Specifies the data type as `i8` (for int8 quantization) or `fp` (for fp16 quantization). The default is `i8`.
- - `<output_rknn_path> (optional)`: Specifies the path for saving the RKNN model. By default, it is saved in the same directory as the ONNX model with the filename `yolov8.rknn`.
+ - `<onnx_model>`: Path to the ONNX model.
+ - `<TARGET_PLATFORM>`: Name of the NPU platform. Options: `rk3562, rk3566, rk3568, rk3576, rk3588, rk1808, rv1109, rv1126`.
+ - `<dtype>` (optional): Choose `i8` or `fp`. `i8` is for INT8 quantization; `fp` is for FP16 quantization. The default is `i8`.
+ - `<output_rknn_path>` (optional): Path to save the RKNN model. By default it is saved in the same directory as the ONNX model with the filename `yolov8.rknn`.
-- Copy the `yolov8n.rknn` model to the target board.
+ :::tip
+ For RK358X users, set `TARGET_PLATFORM` to `rk3588`.
+ :::
-### YOLOv8 Inference on Board
+- Copy the `yolov8n.rknn` model to the device.
-- Download the `rknn-model-zoo-rk3588` package to obtain the YOLOv8 demo (includes the pre-converted `yolov8.rknn` model):
+### YOLOv8 Inference on Device
+
+- Install the `rknn-model-zoo-rk3588` package to obtain the YOLOv8 demo (which includes the pre-converted `yolov8.rknn` model):
```bash
- sudo apt install rknn-model-zoo-rk3588
+ sudo apt install rknn-model-zoo-rk3588
```
- If using a CLI version, you can download the package manually from the [deb package download page](https://github.com/radxa-pkg/rknn_model_zoo/releases/tag/1.6.0-1).
+ If you are using a CLI-only system, you can download the `rknn-model-zoo-rk3588` deb package from the [release page](https://github.com/radxa-pkg/rknn_model_zoo/releases/tag/1.6.0-1).
-- Run the YOLOv8 example code:
+- Run the YOLOv8 example:
- If using a model converted on the PC, copy it to the board and specify the model path using the `--model_path` parameter:
+ If you are using a model converted on the PC, copy it to the device and specify the model path with the `--model_path` parameter.
@@ -84,19 +88,18 @@ Using RKNN to deploy YOLOv8 involves two steps:
- Example output:
-
```bash
$ sudo python3 yolov8.py --model_path ../model/yolov8.rknn --img_save
- import rknn failed, try to import rknnlite
+ import rknn failed,try to import rknnlite
--> Init runtime environment
- I RKNN: [09:01:01.819] RKNN Runtime Information, librknnrt version: 1.6.0
+ I RKNN: [09:01:01.819] RKNN Runtime Information, librknnrt version: 1.6.0 (9a7b5d24c@2023-12-13T17:31:11)
I RKNN: [09:01:01.819] RKNN Driver Information, version: 0.8.2
- W RKNN: [09:01:01.819] Current driver version: 0.8.2, recommend to upgrade the driver to version >= 0.8.8
- I RKNN: [09:01:01.819] RKNN Model Information, version: 6, target platform: rk3588
- W RKNN: [09:01:01.836] Query dynamic range failed. Ret code: RKNN_ERR_MODEL_INVALID.
+ W RKNN: [09:01:01.819] Current driver version: 0.8.2, recommend to upgrade the driver to the new version: >= 0.8.8
+ I RKNN: [09:01:01.819] RKNN Model Information, version: 6, toolkit version: 1.6.0+81f21f4d(compiler version: 1.6.0 (585b3edcf@2023-12-11T07:56:14)), target: RKNPU v2, target platform: rk3588, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
+ W RKNN: [09:01:01.836] query RKNN_QUERY_INPUT_DYNAMIC_RANGE error, rknn model is static shape type, please export rknn with dynamic_shapes
+ W Query dynamic range failed. Ret code: RKNN_ERR_MODEL_INVALID. (If it is a static shape RKNN model, please ignore the above warning message.)
done
Model-../model/yolov8.rknn is rknn model, starting val
infer 1/1
@@ -107,17 +110,17 @@ Using RKNN to deploy YOLOv8 involves two steps:
person @ (477 225 560 522) 0.856
person @ (79 327 116 513) 0.306
bus @ (95 136 549 449) 0.860
- Detection result saved to ./result/bus.jpg
+ Detection result save to ./result/bus.jpg
```
- **Parameter Explanation**:
+ Parameter explanation:
- - `--model_path`: Specifies the RKNN model path.
- - `--img_folder`: Specifies the folder containing images for inference. Default is `../model`.
- - `--img_save`: Whether to save the inference result images to the `./result` folder. Default is `False`.
+ - `--model_path`: Path to the RKNN model.
+ - `--img_folder`: Folder containing images for inference, default is `../model`.
+ - `--img_save`: Whether to save the inference result images to `./result`. Default is `False`.
-- All inference results are saved in the `./result` folder.
+- All inference results are stored in the `./result` directory.
diff --git a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit2-pc.mdx b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit2-pc.mdx
index c14d28fbe..c734b9565 100644
--- a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit2-pc.mdx
+++ b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit2-pc.mdx
@@ -1,12 +1,12 @@
:::tip
-This document demonstrates how to use the `rknn-toolkit2` to simulate inference of the YOLOv5 segmentation model on an x86 PC without a development board. For the required environment setup, refer to [RKNN Installation](./rknn_install).
+This document demonstrates how to use `rknn-toolkit2` on an x86 PC to perform simulation inference of the YOLOv5 segmentation model without a development board. For the required environment setup, please refer to [RKNN Installation](./rknn-install).
:::
## Prepare the Model
-This example uses a pretrained ONNX model from the [rknn_model_zoo](https://github.com/airockchip/rknn_model_zoo). The model is converted and simulated on the PC.
+This example uses a pre-trained ONNX model from the [rknn_model_zoo](https://github.com/airockchip/rknn_model_zoo) as a case study, converting the model and running simulation inference on the PC.
-- If using `conda`, activate the `rknn` conda environment first:
+- If you are using Conda, first activate the `rknn` Conda environment:
@@ -22,17 +22,17 @@ This example uses a pretrained ONNX model from the [rknn_model_zoo](https://gith
```bash
cd rknn_model_zoo/examples/yolov5_seg/model
- # Download the pretrained yolov5s-seg.onnx model
+ # Download the pre-trained yolov5s-seg.onnx model
bash download_model.sh
```
- :::tip
- If you encounter network issues, visit [this page](https://github.com/airockchip/rknn_model_zoo?tab=readme-ov-file#model-support) to download the model manually and place it in the corresponding folder.
+ :::tip
+ If you encounter network issues, you can visit [this page](https://github.com/airockchip/rknn_model_zoo?tab=readme-ov-file#model-support) to manually download the model and place it in the corresponding folder.
:::
-- Rename the ONNX model file to `rknn` format (**only for PC simulation purposes**):
+- Rename the ONNX model file suffix to `.rknn` (**only for PC-side result simulation**):
@@ -42,14 +42,13 @@ This example uses a pretrained ONNX model from the [rknn_model_zoo](https://gith
-- (Optional) Convert the ONNX model to `yolov5s-seg.rknn` using `rknn-toolkit2`:
+- (Optional) Use `rknn-toolkit2` to convert the model to `yolov5s-seg.rknn`:
```bash
cd rknn_model_zoo/examples/yolov5_seg/python
python3 convert.py
- # Example:
# python3 convert.py ../model/yolov5s-seg.onnx rk3588 i8 ../model/yolov5s-seg.rknn
```
@@ -58,13 +57,15 @@ This example uses a pretrained ONNX model from the [rknn_model_zoo](https://gith
Parameter explanation:
- `<onnx_model>`: Path to the ONNX model.
- - `<TARGET_PLATFORM>`: Target NPU platform, such as `rk3562`, `rk3566`, `rk3568`, `rk3576`, `rk3588`, `rk1808`, `rv1109`, `rv1126`.
- - `<dtype>`: Quantization type (`i8` for int8, `fp` for fp16). Default is `i8`.
- - `<output_rknn_path>`: Path to save the RKNN model. Defaults to the same directory as the ONNX model with the filename `yolov5s-seg.rknn`.
+ - `<TARGET_PLATFORM>`: Name of the NPU platform. Options: `rk3562, rk3566, rk3568, rk3576, rk3588, rk1808, rv1109, rv1126`.
+ - `<dtype>`: Choose `i8` or `fp`. `i8` is for INT8 quantization; `fp` is for FP16 quantization. The default is `i8`.
+ - `<output_rknn_path>`: Path to save the RKNN model. By default it is saved in the same directory as the ONNX model with the filename `yolov5s-seg.rknn`.
-## ONNX Model Inference on PC
+ :::tip
+ For RK358X users, set `TARGET_PLATFORM` to `rk3588`.
+ :::
-Run the inference script:
+## ONNX Model Inference on PC
@@ -74,9 +75,7 @@ python3 yolov5_seg.py --model_path ../model/yolov5s-seg.onnx --img_show
-Example output:
-
-
+
```bash
person @ (212 242 285 510) 0.871
@@ -95,7 +94,7 @@ person @ (80 328 125 517) 0.470
## RKNN Model Simulation Inference on PC
-- Install required dependencies via `pip3`:
+- Install the required dependencies with `pip3`:
@@ -105,71 +104,69 @@ person @ (80 328 125 517) 0.470
-- Modify the `rknn_model_zoo/py_utils/rknn_executor.py` file as follows (**make a backup of the original file**):
+- Run simulation inference with the RKNN model:
-
+ - Modify `rknn_model_zoo/py_utils/rknn_executor.py` to the following code (**be sure to back up the original file**):
- ```python
- from rknn.api import RKNN
+
- class RKNN_model_container():
- def __init__(self, model_path, target=None, device_id=None) -> None:
- rknn = RKNN()
- DATASET_PATH = '../../../datasets/COCO/coco_subset_20.txt'
- onnx_model = model_path[:-4] + 'onnx'
- rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform=target)
- rknn.load_onnx(model=onnx_model)
- rknn.build(do_quantization=True, dataset=DATASET_PATH)
- rknn.init_runtime()
- self.rknn = rknn
+ ```python
+ from rknn.api import RKNN
- def run(self, inputs):
- if isinstance(inputs, list) or isinstance(inputs, tuple):
- pass
- else:
- inputs = [inputs]
+ class RKNN_model_container():
+ def __init__(self, model_path, target=None, device_id=None) -> None:
+ rknn = RKNN()
+ DATASET_PATH = '../../../datasets/COCO/coco_subset_20.txt'
+ onnx_model = model_path[:-4] + 'onnx'
+ rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform=target)
+ rknn.load_onnx(model=onnx_model)
+ rknn.build(do_quantization=True, dataset=DATASET_PATH)
+ rknn.init_runtime()
+ self.rknn = rknn
- result = self.rknn.inference(inputs=inputs)
- return result
+ def run(self, inputs):
+ if isinstance(inputs, list) or isinstance(inputs, tuple):
+ pass
+ else:
+ inputs = [inputs]
- def release(self):
- self.rknn.release()
- self.rknn = None
- ```
+ result = self.rknn.inference(inputs=inputs)
+ return result
-
+ def release(self):
+ self.rknn.release()
+ self.rknn = None
+ ```
-- Run the simulation inference script:
+
-
+ - Run the simulation inference script:
- ```bash
- python3 yolov5_seg.py --target <TARGET_PLATFORM> --model_path <rknn_model> --img_show
- # Example:
- # python3 yolov5_seg.py --target rk3588 --model_path ../model/yolov5s-seg.rknn --img_show
- ```
+
-
+ ```bash
+ python3 yolov5_seg.py --target <TARGET_PLATFORM> --model_path <rknn_model> --img_show
+ # python3 yolov5_seg.py --target rk3588 --model_path ../model/yolov5s-seg.rknn --img_show
+ ```
- Parameters:
+
+  Parameter explanation:
- - `--target`: Specifies the NPU platform for simulation. Options include `rk3562`, `rk3566`, `rk3568`, `rk3576`, `rk3588`, `rk1808`, `rv1109`, `rv1126`.
- - `--model_path`: Path to the RKNN model to simulate.
+ - `--target`: Name of the NPU platform to simulate. Options: `rk3562, rk3566, rk3568, rk3576, rk3588, rk1808, rv1109, rv1126`.
-Example output:
+ - `--model_path`: Path to the RKNN model to be simulated.
-
+
-```bash
-person @ (213 239 284 516) 0.882
-person @ (109 240 224 535) 0.869
-person @ (473 231 560 523) 0.845
-bus @ (97 136 548 459) 0.821
-person @ (80 328 124 519) 0.499
-```
+ ```bash
+ person @ (213 239 284 516) 0.882
+ person @ (109 240 224 535) 0.869
+ person @ (473 231 560 523) 0.845
+ bus @ (97 136 548 459) 0.821
+ person @ (80 328 124 519) 0.499
+ ```
-
+
-- The simulation inference result (only simulates NPU computation; actual performance and accuracy depend on inference on the target board):
+ - Simulation inference result (the simulator only simulates NPU computation; actual performance and accuracy are determined by inference on the target board):
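+
+The key point of PC simulation is that `init_runtime()` is called without a `target`, so execution happens on the x86 simulator rather than a board. A condensed sketch of the flow above (paths follow the example; the dummy input is an assumption):
+
+```python
+import numpy as np
+from rknn.api import RKNN
+
+rknn = RKNN()
+rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
+            target_platform='rk3588')
+rknn.load_onnx(model='../model/yolov5s-seg.onnx')
+rknn.build(do_quantization=False)                 # FP16 build, no calibration set
+rknn.init_runtime()                               # no target -> simulator on the host
+
+img = np.zeros((1, 640, 640, 3), dtype=np.uint8)  # dummy NHWC input
+outputs = rknn.inference(inputs=[img])
+rknn.release()
+```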
diff --git a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-ultralytics.mdx b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-ultralytics.mdx
index 62cb0ebdd..c3f7cbe5c 100644
--- a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-ultralytics.mdx
+++ b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-ultralytics.mdx
@@ -1,12 +1,12 @@
:::tip
-This document demonstrates how to perform inference for the YOLOv11 object detection model on RK3588/356X. For the required environment setup, please refer to [RKNN Installation](./rknn_install).
+This document demonstrates how to run inference for the YOLOv11 object detection model on RK3588/356X. For the required environment setup, please refer to [RKNN Installation](./rknn-install).
:::
Currently, the [Ultralytics](https://docs.ultralytics.com/integrations/rockchip-rknn/) library officially supports the RKNN platform. Users of RK3588/356X products can directly use the `ultralytics` library for YOLOv11 model conversion and deployment.
## Model Conversion on PC
-**Radxa has provided a pre-converted `yolov11n.rknn` model. Users can skip the PC-side model conversion section and directly refer to [YOLOv11 Inference on Board](#yolov11-inference-on-board).**
+**Radxa provides a pre-converted `yolov11n.rknn` model. Users can skip the PC-side model conversion section and directly refer to [YOLOv11 Inference on Device](#yolov11-inference-on-device).**
- Install the latest version of Ultralytics:
@@ -19,6 +19,9 @@ Currently, the [Ultralytics](https://docs.ultralytics.com/integrations/rockchip-
- Use Ultralytics to export the YOLOv11 model in RKNN format:
+ :::tip
+ For RK358X users, set `TARGET_PLATFORM` (or `name`) to `rk3588`.
+ :::
@@ -39,7 +42,7 @@ Currently, the [Ultralytics](https://docs.ultralytics.com/integrations/rockchip-
- ```bash
+ ```python
from ultralytics import YOLO
# Load the YOLOv11 model
@@ -60,24 +63,26 @@ Currently, the [Ultralytics](https://docs.ultralytics.com/integrations/rockchip-
- Copy the **yolo11n_rknn_model** directory to the target device.
-## YOLOv11 Inference on Board
+## YOLOv11 Inference on Device
:::tip
-For users of RK356X products, you need to enable the NPU in the terminal using **rsetup** before using the NPU: `sudo rsetup -> Overlays -> Manage overlays -> Enable NPU`, then restart the system.
+For RK356X products, you need to enable the NPU in the terminal using **rsetup** before using the NPU:
+`sudo rsetup -> Overlays -> Manage overlays -> Enable NPU`, then reboot the system.
-If there is no `Enable NPU` option in the overlays options, please update the system via: `sudo rsetup -> system -> System Update`, restart, and execute the above steps to enable the NPU.
+If there is no `Enable NPU` option in `Overlays`, please run: `sudo rsetup -> System -> System Update` to upgrade the system, reboot, and then repeat the above steps to enable the NPU.
:::
-- (Optional) Download the YOLOv11n RKNN model provided by Radxa:
- | Platform | Download Link |
- | -------- | ------------------------------------------------------------ |
- | rk3566 | [yolo11n_3566_rknn_model](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov11/yolo11n_3566_rknn_model.zip) |
- | rk3568 | [yolo11n_3568_rknn_model](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov11/yolo11n_3568_rknn_model.zip) |
- | rk3588 | [yolo11n_3588_rknn_model](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov11/yolo11n_3588_rknn_model.zip) |
+- (Optional) Download the YOLOv11n RKNN models prepared by Radxa:
+
+ | Platform | Download Link |
+ | -------- | ------------------------------------------------------------------------------------------------------------------------------- |
+ | rk3566 | [yolo11n_3566_rknn_model](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov11/yolo11n_3566_rknn_model.zip) |
+ | rk3568 | [yolo11n_3568_rknn_model](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov11/yolo11n_3568_rknn_model.zip) |
+ | rk3588 | [yolo11n_3588_rknn_model](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov11/yolo11n_3588_rknn_model.zip) |
- Install the latest version of Ultralytics in a virtual environment:
- For instructions on virtual environments, refer to [Python Virtual Environment Usage](venv_usage).
+ For instructions on virtual environments, refer to [Python Virtual Environment Usage](../venv-usage).
@@ -87,7 +92,7 @@ If there is no `Enable NPU` option in the overlays options, please update the sy
-- Run inference on the board:
+- Run inference on the device:
@@ -101,6 +106,7 @@ If there is no `Enable NPU` option in the overlays options, please update the sy
+
@@ -122,7 +128,7 @@ If there is no `Enable NPU` option in the overlays options, please update the sy
The results are saved in `runs/detect/predict`.
-
+
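+To work with detections programmatically instead of only saving annotated images, the returned `Results` objects expose the boxes (a hedged sketch; the model directory and test image are assumptions):
+
+```python
+from ultralytics import YOLO
+
+model = YOLO("./yolo11n_rknn_model")       # RKNN-format export directory
+results = model("bus.jpg")                 # assumed local test image
+
+for box in results[0].boxes:
+    cls_id = int(box.cls[0])               # class index
+    conf = float(box.conf[0])              # confidence score
+    x1, y1, x2, y2 = box.xyxy[0].tolist()  # corner coordinates
+    print(results[0].names[cls_id], f"{conf:.2f}", (x1, y1, x2, y2))
+```
+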
## Additional Usage Details
diff --git a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_venv_usage.mdx b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_venv_usage.mdx
index e932883fa..a2b67d99c 100644
--- a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_venv_usage.mdx
+++ b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_venv_usage.mdx
@@ -1,4 +1,4 @@
-In the Radxa OS system, installing Python libraries via `pip3` is restricted by the system. Users can use a virtual environment to isolate the virtual environment from the system environment.
+On Radxa OS, installing Python libraries system-wide with `pip3` is restricted by the system. You can use a Python virtual environment to isolate project dependencies from the system environment.
```bash
error: externally-managed-environment
@@ -9,21 +9,19 @@ error: externally-managed-environment
install.
```
-### Install Virtual Environment
-
-Using Python 3.11 as an example:
+### Install the virtual environment tools
```bash
-sudo apt install python3.11-venv
+sudo apt install python3-venv
```
-### Create a Virtual Environment
+### Create a virtual environment
```bash
python3 -m venv .venv
```
-### Activate the Virtual Environment
+### Activate the virtual environment
```bash
source .venv/bin/activate
@@ -35,7 +33,7 @@ source .venv/bin/activate
pip3 install --upgrade pip
```
-### Deactivate the Virtual Environment
+### Deactivate the virtual environment
```bash
deactivate