# Experiments & demos with the LeRobot Aloha robot arm
- Operating System: Ubuntu 20.04 (via Windows WSL)
- GPU: NVIDIA GeForce RTX 4060, Driver Version 576.52, CUDA 12.9 (training on an RTX 4090D 24 GB or the AutoDL platform)
- Hardware: Logitech C920 webcam; Xuanya robotic arm based on Aloha (https://github.com/Xuanya-Robotics/lerobot)
- Simulation & real-world control
- Imitation learning & reinforcement learning
- Dataset collection & analysis
- Camera setup: describe the camera model, resolution, and mounting method you use.
- Example:

```bash
# List available cameras
ls /dev/video*
# Start the camera configuration script
python scripts/camera_setup.py
```
```bash
git clone https://github.com/yourusername/aloha-project.git
cd aloha-project
# install dependencies etc.
```
## ☁️ AutoDL Platform Training

This section documents the training environment setup and execution on the **AutoDL platform**, including conda environment setup, code installation, Hugging Face integration, and training script usage.
---
### ๐ง Environment Setup
1. **Initialize Conda**

```bash
conda env list
conda activate base
conda init
```

Restart the terminal after initializing conda.
- Create and activate environment

```bash
conda create -y -n lerobot python=3.10
conda activate lerobot
```

- Enable Network Turbo (AutoDL specific)

```bash
source /etc/network_turbo
# Output: Successfully enabled. Note: Academic use only, no stability guaranteed.
```

- Clone project repositories

```bash
git clone https://github.com/Xuanya-Robotics/lerobot.git
cd lerobot
git clone https://github.com/Xuanya-Robotics/Alicia_duo_sdk.git Alicia_duo_sdk
```

- Install dependencies

```bash
conda install ffmpeg -c conda-forge
pip install -e .
```

- Set Git credential helper

```bash
git config --global credential.helper store
```

- Login using token
```bash
hf auth login --token <your_token>
# Example:
# hf auth login --token hf_.....................
```

After login, the token is saved locally. You may add `--add-to-git-credential` if needed.
- Set Hugging Face username environment variable

```bash
HF_USER=$(hf auth whoami | head -n 1) && echo $HF_USER
# Example output: Enstar07
```

### 🚀 Training

- Train on a dataset hosted on the Hugging Face Hub

```bash
python lerobot/scripts/train.py \
  --dataset.repo_id Enstar07/demo_dataset3 \
  --policy.type act \
  --output_dir outputs/train/act_alicia_duo_model_final \
  --job_name alicia_duo_act_training_final \
  --policy.device cuda \
  --batch_size 32 \
  --steps 2000 \
  --save_freq 500 \
  --eval_freq 500 \
  --log_freq 100
```
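As a quick sanity check on the training budget: the total number of samples the policy sees is simply `steps × batch_size` (ignoring epoch boundaries). The helper below is a hypothetical illustration, not part of LeRobot:

```python
def samples_seen(steps: int, batch_size: int) -> int:
    """Total training samples drawn over a run (ignoring epoch boundaries)."""
    return steps * batch_size

# The quick run above: 2000 steps at batch size 32.
print(samples_seen(2000, 32))  # 64000
```

Comparing runs this way helps keep the effective dataset exposure constant when you change the batch size.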
โ ๏ธ Cloud-based dataset training may suffer from network delays. For large datasets, consider using FileZilla to upload local data directly to AutoDL.
- Train on a locally uploaded dataset (quick test, same settings as above)

```bash
python lerobot/scripts/train.py \
  --dataset.repo_id /root/lerobot/data/zhuiPick3 \
  --policy.type act \
  --output_dir outputs/train/act_alicia_duo_model_final \
  --job_name alicia_duo_act_training_final \
  --policy.device cuda \
  --batch_size 32 \
  --steps 2000 \
  --save_freq 500 \
  --eval_freq 500 \
  --log_freq 100
```

- Full training run on the local dataset

```bash
python lerobot/scripts/train.py \
  --dataset.repo_id /root/lerobot/data/zhuiPick3 \
  --policy.type act \
  --output_dir outputs/train/act_alicia_duo_model_final \
  --job_name alicia_duo_act_training_final \
  --policy.device cuda \
  --batch_size 64 \
  --steps 20000 \
  --save_freq 1000 \
  --eval_freq 1000 \
  --log_freq 500
```

- After training completes, use FileZilla to download the trained model directory to your local system.
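With `--save_freq 1000` and `--steps 20000`, checkpoints are written every 1000 steps, and the directory names appear to be the step count zero-padded to six digits (inferred from the `020000` directory used in the deployment path; the `checkpoint_dirs` helper below is a hypothetical sketch, not LeRobot API):

```python
def checkpoint_dirs(total_steps: int, save_freq: int) -> list[str]:
    """Expected checkpoint directory names for a run, assuming
    six-digit zero-padded step counts (e.g. 020000)."""
    return [f"{step:06d}" for step in range(save_freq, total_steps + 1, save_freq)]

dirs = checkpoint_dirs(20000, 1000)
print(dirs[0], dirs[-1], len(dirs))  # 001000 020000 20
```

This makes it easy to confirm the FileZilla download is complete before deployment: the last directory should match the final step count.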
- Update the deployment script path:

```python
# In dp_inference.py
ckpt_path = "/home/zexuan/Robot/xuanArm/system2/lerobot/outputs/train/filezilla/020000/pretrained_model"
```

- Then run deployment:

```bash
cd examples/
python dp_inference.py
```
---
## ๐งช AutoDL Training Notes