From 701bf785da47b9fdf3bc5df130490652931bca22 Mon Sep 17 00:00:00 2001 From: Minghuan Liu Date: Thu, 15 Aug 2024 18:20:59 +0800 Subject: [PATCH 01/27] add ack --- README.md | 2 ++ high-level/README.md | 24 +++++++++++++++++------- high-level/data/cfg/b1z1_pickmulti.yaml | 2 +- 3 files changed, 20 insertions(+), 8 deletions(-) diff --git a/README.md b/README.md index 561e00a..ff8175b 100644 --- a/README.md +++ b/README.md @@ -64,6 +64,8 @@ Detailed code structures can be found in these directories. - [rsl_rl](https://github.com/leggedrobotics/rsl_rl) - [skrl](https://github.com/Toni-SM/skrl) +The low-level training also draws heavily on [DeepWBC](https://github.com/MarkFzp/Deep-Whole-Body-Control). + ## Codebase Contributions - [Minghuan Liu](https://minghuanliu.com) worked on improving training efficiency, reward engineering, filling sim2real gaps, and reaching the expected behaviors, while cleaning and integrating the whole codebase for simplicity. diff --git a/high-level/README.md b/high-level/README.md index 011d09b..4219131 100644 --- a/high-level/README.md +++ b/high-level/README.md @@ -3,7 +3,10 @@ This code base only includes the task of picking multiple objects. ## Code structure + `data` contains assets and config files. +assets: robot models +config: RL environment configs `envs` contains environment-related code. @@ -15,24 +18,25 @@ This code base only includes the task of picking multiple objects. ## Train -1. Environments: +1. Environments: **Picking multiple objects**: [b1_z1pickmulti.py](./envs/b1_z1pickmulti.py) is for the walking and picking policy, supporting a floating base (fixed and varied body heights). - 2. 
To train the state-based teacher, use [train_multistate.py](./train_multistate.py) with the config [b1z1_pickmulti.yaml](./data/cfg/b1z1_pickmulti.yaml) (remember to set the pre-trained low-level policy path in the config file): + ```bash python train_multistate.py --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "some descriptions" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick ``` + Explanation of the arguments: -`--task` should be the full name of the environment, with every first letter of each word capitalized. +`--task` should be the full name of the environment, with every first letter of each word capitalized. -`--timesteps` total training steps. +`--timesteps` is the total number of training steps. `--experiment_dir` is the name of the directory where the run is saved. -`--wandb`, then the training will be logged to wandb. You can omit this argument if you don't want to use wandb or when debugging. +`--wandb` enables logging the training to wandb. You can omit this argument if you don't want to use wandb or when debugging. -`--wandb_project` is the name of the project in wandb. +`--wandb_project` is the name of the project in wandb. `--wandb_name` is the name of the run in wandb, which is also the name of this run in experiment_dir. @@ -47,26 +51,32 @@ Arguments explanation: `--stop_pick` enforces the robot to stop when the gripper is closing. To play the teacher policy, use [play_multistate.py](./play_multistate.py): + ```bash python play_multistate.py --task B1Z1PickMulti --checkpoint "(specify the path)" # --(same arguments as training) ``` -It should be a maximum of 60000 timesteps for successful teacher policy training. 3. 
To train the vision-based student policy, use [train_multi_bc_deter.py](./train_multi_bc_deter.py) with the config [b1z1_pickmulti.yaml](./data/cfg/b1z1_pickmulti.yaml): + ```bash python train_multi_bc_deter.py --headless --task B1Z1PickMulti --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --experiment_dir "b1-pick-multi-stu" --wandb --wandb_project "b1-pick-multi-stu" --wandb_name "checkpoint dir path" --teacher_ckpt_path "teacher checkpoint path" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick ``` + Arguments are similar to those above. To play the trained student policy, use [play_multi_bc_deter.py](./play_multi_bc_deter.py): + ```bash python play_multi_bc_deter.py --task B1Z1PickMulti --checkpoint "(specify the path)" # --(same arguments as training) ``` + If you don't specify `--num_envs`, it will use 34 by default (only for this script). Successful student policy training should take at most 60000 timesteps. ## Others + [test_pointcloud.py](./test_pointcloud.py) can be used to check the point cloud of the objects. [train_multistate_asym.py](./train_multistate_asym.py) is an attempt at using asymmetric PPO to train the high-level policy (i.e., a vision-based policy with a privileged value function); it trains inefficiently and is not comparable to the teacher-student approach, since the memory consumed by depth images prevents running many parallel environments. 
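The teacher-student pipeline described in the README diff above (a state-based teacher whose actions supervise a vision-based student via behavior cloning) can be sketched in miniature as follows. This is an illustrative toy with made-up linear policies and dimensions, not the repository's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the teacher sees the full privileged state,
# the student only sees a restricted slice (e.g. proprioception + vision features).
STATE_DIM, STUDENT_DIM, ACT_DIM = 8, 5, 3

# Stand-in "teacher policy": a fixed linear map, imagined as already trained with RL.
W_teacher = rng.normal(size=(STATE_DIM, ACT_DIM))

def teacher_policy(states):
    # Maps privileged states to actions.
    return states @ W_teacher

# Collect rollout states and relabel them with teacher actions (behavior cloning).
states = rng.normal(size=(1000, STATE_DIM))
teacher_actions = teacher_policy(states)
student_obs = states[:, :STUDENT_DIM]  # student's restricted observation

# Fit the student by least squares on the imitation loss ||a_student - a_teacher||^2.
W_student, *_ = np.linalg.lstsq(student_obs, teacher_actions, rcond=None)

mse = np.mean((student_obs @ W_student - teacher_actions) ** 2)
print(f"student imitation MSE: {mse:.3f}")
```

The residual MSE stays nonzero because the student cannot observe the privileged state dimensions; in the real pipeline a depth encoder plays the role of the restricted observation.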
diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index 8f609fd..bb68a34 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -17,7 +17,7 @@ env: holdSteps: 25 lastCommands: False - low_policy_path: "/data/mhliu/visual_wholebody/high-level/data/low_policy/publiccheckrollrew_42000.pt" # "data/low_policy/jan17-frontfeet_terrainflat_linrew2_freq2_gaitstopzero_penelizeang_roll_fixbug_35600.pt" + low_policy_path: "/data/mhliu/visual_wholebody/high-level/data/s.pt" # "data/low_policy/jan17-frontfeet_terrainflat_linrew2_freq2_gaitstopzero_penelizeang_roll_fixbug_35600.pt" asset: assetRoot: "data/asset" From c131d9715a3e75e8778a25a058728cc4ae4e3dc0 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Wed, 13 Aug 2025 13:31:30 +0200 Subject: [PATCH 02/27] implement low policy --- low-level/README.md | 6 ++++++ low-level/legged_gym/utils/helpers.py | 2 +- low-level/record.txt | 14 ++++++++++++++ 3 files changed, 21 insertions(+), 1 deletion(-) create mode 100644 low-level/record.txt diff --git a/low-level/README.md b/low-level/README.md index 9eff31d..581a1d5 100644 --- a/low-level/README.md +++ b/low-level/README.md @@ -1,6 +1,7 @@ # Training a universal low-level policy ## Code structure + `legged_gym/envs` contains environment-related code. `legged_gym/scripts` contains train and test scripts. @@ -13,6 +14,7 @@ The environment related code is `legged_gym/legged_gym/envs/manip_loco/manip_loc cd legged_gym/scripts python train.py --headless --exptid SOME_YOUR_DESCRIPTION --proj_name b1z1-low --task b1z1 --sim_device cuda:0 --rl_device cuda:0 --observe_gait_commands ``` + - `--debug` disables wandb and sets a small number of envs for faster execution. - `--headless` disables rendering, typically used when you train a model. - `--proj_name` is the folder containing all your logs and the wandb project name; `manip-loco` is the default. 
@@ -21,12 +23,16 @@ python train.py --headless --exptid SOME_YOUR_DESCRIPTION --proj_name b1z1-low - Check `legged_gym/legged_gym/utils/helpers.py` for all command line args. ## Play + You only need to specify `--exptid`; the parser will automatically find the corresponding run. + ```bash cd legged_gym/scripts python play.py --exptid SOME_YOUR_DESCRIPTION --task b1z1 --proj_name b1z1-low --checkpoint 64000 --observe_gait_commands ``` + Use `--sim_device cpu --rl_device cpu` if there is not enough GPU memory. ## Suggestions + To choose a good low-level policy that can be further used for training the high-level policy, we suggest deploying the low-level policy first and checking that it behaves well before training a high-level policy. diff --git a/low-level/legged_gym/utils/helpers.py b/low-level/legged_gym/utils/helpers.py index 61e335f..6285c8a 100644 --- a/low-level/legged_gym/utils/helpers.py +++ b/low-level/legged_gym/utils/helpers.py @@ -30,11 +30,11 @@ import os import copy -import torch import numpy as np import random from isaacgym import gymapi from isaacgym import gymutil +import torch from legged_gym import LEGGED_GYM_ROOT_DIR, LEGGED_GYM_ENVS_DIR diff --git a/low-level/record.txt b/low-level/record.txt new file mode 100644 index 0000000..6d4cc06 --- /dev/null +++ b/low-level/record.txt @@ -0,0 +1,14 @@ +Observation (obs): + Joint positions, velocities + Contact sensors or gait phase info + Command targets (if --observe_gait_commands) + Possibly noise from _get_noise_scale_vec() + + + +Reward: +Tracks performance: velocity tracking, foot clearance, energy cost +Done condition: falls, too far from goal, etc. 
+ + + From 8b57db5654a1536c856ffc2eb19306fb38a7e89 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Wed, 13 Aug 2025 13:31:30 +0200 Subject: [PATCH 03/27] implement low policy --- high-level/data/cfg/b1z1_pickmulti.yaml | 2 +- low-level/README.md | 6 ++++++ low-level/legged_gym/utils/helpers.py | 2 +- low-level/record.txt | 14 ++++++++++++++ 4 files changed, 22 insertions(+), 2 deletions(-) create mode 100644 low-level/record.txt diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index bb68a34..dbe7528 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -17,7 +17,7 @@ env: holdSteps: 25 lastCommands: False - low_policy_path: "/data/mhliu/visual_wholebody/high-level/data/s.pt" # "data/low_policy/jan17-frontfeet_terrainflat_linrew2_freq2_gaitstopzero_penelizeang_roll_fixbug_35600.pt" + low_policy_path: "/home/wang/Desktop/visual_wholebody/low-level/logs/b1z1-low/gait_tuning_v1/model_1200.pt" # "data/low_policy/jan17-frontfeet_terrainflat_linrew2_freq2_gaitstopzero_penelizeang_roll_fixbug_35600.pt" asset: assetRoot: "data/asset" diff --git a/low-level/README.md b/low-level/README.md index 9eff31d..581a1d5 100644 --- a/low-level/README.md +++ b/low-level/README.md @@ -1,6 +1,7 @@ # Training a universal low-level policy ## Code structure + `legged_gym/envs` contains environment-related code. `legged_gym/scripts` contains train and test scripts. @@ -13,6 +14,7 @@ The environment related code is `legged_gym/legged_gym/envs/manip_loco/manip_loc cd legged_gym/scripts python train.py --headless --exptid SOME_YOUR_DESCRIPTION --proj_name b1z1-low --task b1z1 --sim_device cuda:0 --rl_device cuda:0 --observe_gait_commands ``` + - `--debug` disables wandb and sets a small number of envs for faster execution. - `--headless` disables rendering, typically used when you train a model. - `--proj_name` is the folder containing all your logs and the wandb project name; `manip-loco` is the default. 
@@ -21,12 +23,16 @@ python train.py --headless --exptid SOME_YOUR_DESCRIPTION --proj_name b1z1-low - Check `legged_gym/legged_gym/utils/helpers.py` for all command line args. ## Play + You only need to specify `--exptid`; the parser will automatically find the corresponding run. + ```bash cd legged_gym/scripts python play.py --exptid SOME_YOUR_DESCRIPTION --task b1z1 --proj_name b1z1-low --checkpoint 64000 --observe_gait_commands ``` + Use `--sim_device cpu --rl_device cpu` if there is not enough GPU memory. ## Suggestions + To choose a good low-level policy that can be further used for training the high-level policy, we suggest deploying the low-level policy first and checking that it behaves well before training a high-level policy. diff --git a/low-level/legged_gym/utils/helpers.py b/low-level/legged_gym/utils/helpers.py index 61e335f..6285c8a 100644 --- a/low-level/legged_gym/utils/helpers.py +++ b/low-level/legged_gym/utils/helpers.py @@ -30,11 +30,11 @@ import os import copy -import torch import numpy as np import random from isaacgym import gymapi from isaacgym import gymutil +import torch from legged_gym import LEGGED_GYM_ROOT_DIR, LEGGED_GYM_ENVS_DIR diff --git a/low-level/record.txt b/low-level/record.txt new file mode 100644 index 0000000..6d4cc06 --- /dev/null +++ b/low-level/record.txt @@ -0,0 +1,14 @@ +Observation (obs): + Joint positions, velocities + Contact sensors or gait phase info + Command targets (if --observe_gait_commands) + Possibly noise from _get_noise_scale_vec() + + + +Reward: +Tracks performance: velocity tracking, foot clearance, energy cost +Done condition: falls, too far from goal, etc. 
+ + + From f1d6f4187f64b7819ed954dfe6c7cbbb30fe1dd6 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Sun, 17 Aug 2025 11:46:30 +0200 Subject: [PATCH 04/27] update --- high-level/README.md | 7 +++++++ high-level/train_multistate.py | 1 + low-level/legged_gym/scripts/train.py | 2 +- low-level/record.txt | 6 ++++++ 4 files changed, 15 insertions(+), 1 deletion(-) diff --git a/high-level/README.md b/high-level/README.md index 4219131..b83e4c3 100644 --- a/high-level/README.md +++ b/high-level/README.md @@ -80,3 +80,10 @@ It should be a maximum of 60000 timesteps for successful student policy training [test_pointcloud.py](./test_pointcloud.py) can be use for checking the pointcloud of the objects. [train_multistate_asym.py](./train_multistate_asym.py) is a try of using asymetric PPO for training the high-level policy (i.e, vision-based policy and privilaged value function), it is training inefficient and is not comparable to the teacher-student as it cannot parallel too many environments due to the depth images consumption. 
+ + + + +command: + +python train_multistate.py --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "some descriptions" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick diff --git a/high-level/train_multistate.py b/high-level/train_multistate.py index 4061c92..7763cca 100644 --- a/high-level/train_multistate.py +++ b/high-level/train_multistate.py @@ -226,3 +226,4 @@ def get_trainer(is_eval=False): trainer = get_trainer() trainer.train() + diff --git a/low-level/legged_gym/scripts/train.py b/low-level/legged_gym/scripts/train.py index 36a7632..2ec51ab 100644 --- a/low-level/legged_gym/scripts/train.py +++ b/low-level/legged_gym/scripts/train.py @@ -49,7 +49,7 @@ def train(args): mode = "disabled" args.rows = 6 args.cols = 2 - args.num_envs = 128 + args.num_envs = 1 else: mode = "online" wandb.init(project=args.proj_name, name=args.exptid, mode=mode, dir=LEGGED_GYM_ENVS_DIR +"/logs") diff --git a/low-level/record.txt b/low-level/record.txt index 6d4cc06..34f696f 100644 --- a/low-level/record.txt +++ b/low-level/record.txt @@ -10,5 +10,11 @@ Reward: Tracks performance: velocity tracking, foot clearance, energy cost Done condition: falls, too far from goal, etc. 
+Command: +python scripts/train.py --headless --exptid gait_tunning_v2 --proj_name b1z1-low --task b1z1 --sim_device cuda:0 --rl_device cuda:0 --observe_gait_commands --debug + + +python play.py --exptid gait_tunning_v2 --task b1z1 --proj_name b1z1-low --checkpoint 13000 --observe_gait_commands + From cd078346ac29d8b2f7cb01b5391d3dd8053f0e4b Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Sun, 17 Aug 2025 14:32:40 +0200 Subject: [PATCH 05/27] update --- high-level/README.md | 2 +- high-level/data/cfg/b1z1_pickmulti.yaml | 2 +- low-level/README.md | 6 ++++++ 3 files changed, 8 insertions(+), 2 deletions(-) diff --git a/high-level/README.md b/high-level/README.md index b83e4c3..f6aa85a 100644 --- a/high-level/README.md +++ b/high-level/README.md @@ -86,4 +86,4 @@ It should be a maximum of 60000 timesteps for successful student policy training command: -python train_multistate.py --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "some descriptions" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick +python train_multistate.py --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "b1-pick-multi-teacher" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index dbe7528..5a656fc 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -17,7 +17,7 @@ env: holdSteps: 25 lastCommands: False - low_policy_path: "/home/wang/Desktop/visual_wholebody/low-level/logs/b1z1-low/gait_tuning_v1/model_1200.pt" # 
"data/low_policy/jan17-frontfeet_terrainflat_linrew2_freq2_gaitstopzero_penelizeang_roll_fixbug_35600.pt" + low_policy_path: "/home/wang/Desktop/visual_wholebody/low-level/logs/b1z1-low/gait_tuning_v2/model_13000.pt" # "data/low_policy/jan17-frontfeet_terrainflat_linrew2_freq2_gaitstopzero_penelizeang_roll_fixbug_35600.pt" asset: assetRoot: "data/asset" diff --git a/low-level/README.md b/low-level/README.md index 581a1d5..367b768 100644 --- a/low-level/README.md +++ b/low-level/README.md @@ -36,3 +36,9 @@ Use `--sim_device cpu --rl_device cpu` in case not enough GPU memory. ## Suggestions To choose a good low-level policy that can be further used for training the high-level policy, we suggest you deploy the low-level policy first, and see if it goes well before training a high-level policy. + + + +command line + +python play.py --exptid gait_tuning_v2 --task b1z1 --proj_name b1z1-low --checkpoint 13000 --observe_gait_commands From 479a759466f665ab62e83e9f4a9bf4688123eb02 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Tue, 19 Aug 2025 00:01:49 +0200 Subject: [PATCH 06/27] update --- high-level/README.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/high-level/README.md b/high-level/README.md index f6aa85a..a1b1a76 100644 --- a/high-level/README.md +++ b/high-level/README.md @@ -81,9 +81,8 @@ It should be a maximum of 60000 timesteps for successful student policy training [train_multistate_asym.py](./train_multistate_asym.py) is a try of using asymetric PPO for training the high-level policy (i.e, vision-based policy and privilaged value function), it is training inefficient and is not comparable to the teacher-student as it cannot parallel too many environments due to the depth images consumption. 
- - - command: python train_multistate.py --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "b1-pick-multi-teacher" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick + +python play_multistate.py --task B1Z1PickMulti --checkpoint /home/wang/Desktop/visual_wholebody/high-level/b1-pick-multi-teacher/b1-pick-multi-teacher/checkpoints/agent_50000.pt --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick --rl_device "cuda:0" --sim_device "cuda:0" From 9e92554de1d93111f0ccf8ef126d4dd2366f7e26 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Tue, 19 Aug 2025 00:11:48 +0200 Subject: [PATCH 07/27] contact_offset: 0.04 --- high-level/data/cfg/b1z1_pickmulti.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index 5a656fc..a642efb 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -216,7 +216,7 @@ sim: solver_type: 1 num_position_iterations: 8 num_velocity_iterations: 0 - contact_offset: 0.02 + contact_offset: 0.04 rest_offset: 0.0 bounce_threshold_velocity: 0.2 max_depenetration_velocity: 50.0 From aac0bdd8774cd7dae4655f34a66046545e343b63 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Tue, 19 Aug 2025 00:30:11 +0200 Subject: [PATCH 08/27] 1024 envs --- high-level/data/cfg/b1z1_pickmulti.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index a642efb..4ee6d6a 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -1,5 +1,5 @@ env: - numEnvs: 10240 + numEnvs: 1024 numAgents: 1 envSpacing: 5.0 enableDebugVis: False From 04f4ece63dfe3de1cdf51bb8d30a55497f37b36a Mon Sep 17 
00:00:00 2001 From: Weijie Wang Date: Wed, 20 Aug 2025 13:22:02 +0200 Subject: [PATCH 09/27] update --- high-level/README.md | 1 - high-level/data/cfg/b1z1_pickmulti.yaml | 5 +++-- high-level/envs/b1z1_pickmulti.py | 2 +- low-level/README.md | 2 -- 4 files changed, 4 insertions(+), 6 deletions(-) diff --git a/high-level/README.md b/high-level/README.md index a1b1a76..5f58d7a 100644 --- a/high-level/README.md +++ b/high-level/README.md @@ -83,6 +83,5 @@ It should be a maximum of 60000 timesteps for successful student policy training command: -python train_multistate.py --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "b1-pick-multi-teacher" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick python play_multistate.py --task B1Z1PickMulti --checkpoint /home/wang/Desktop/visual_wholebody/high-level/b1-pick-multi-teacher/b1-pick-multi-teacher/checkpoints/agent_50000.pt --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick --rl_device "cuda:0" --sim_device "cuda:0" diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index 4ee6d6a..6ef7be1 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -17,8 +17,9 @@ env: holdSteps: 25 lastCommands: False - low_policy_path: "/home/wang/Desktop/visual_wholebody/low-level/logs/b1z1-low/gait_tuning_v2/model_13000.pt" # "data/low_policy/jan17-frontfeet_terrainflat_linrew2_freq2_gaitstopzero_penelizeang_roll_fixbug_35600.pt" - + low_policy_path: "/home/wang/Desktop/visual_wholebody/low-level/logs/b1z1-low/gait_reference/model_37000.pt" + # "data/low_policy/jan17-frontfeet_terrainflat_linrew2_freq2_gaitstopzero_penelizeang_roll_fixbug_35600.pt" + # "/home/wang/Desktop/visual_wholebody/low-level/logs/b1z1-low/gait_tuning_v2/model_13000.pt" 
asset: assetRoot: "data/asset" assetFileRobot: "b1z1-col/urdf/b1z1.urdf" diff --git a/high-level/envs/b1z1_pickmulti.py b/high-level/envs/b1z1_pickmulti.py index ab8b047..4c89e55 100644 --- a/high-level/envs/b1z1_pickmulti.py +++ b/high-level/envs/b1z1_pickmulti.py @@ -257,7 +257,7 @@ def _reset_table(self, env_ids): else: rand_heights = torch.ones((len(env_ids), 1), device=self.device, dtype=torch.float)*self.table_heights_fix - self.table_dimz / 2 - self._table_root_states[env_ids, 2] = rand_heights.squeeze(1) - self.table_dimz / 2.0 + self._table_root_states[env_ids, 2] = self.table_dimz / 2.0 self.table_heights[env_ids] = self._table_root_states[env_ids, 2] + self.table_dimz / 2.0 def _reset_actors(self, env_ids): diff --git a/low-level/README.md b/low-level/README.md index 367b768..b33f132 100644 --- a/low-level/README.md +++ b/low-level/README.md @@ -37,8 +37,6 @@ Use `--sim_device cpu --rl_device cpu` in case not enough GPU memory. To choose a good low-level policy that can be further used for training the high-level policy, we suggest you deploy the low-level policy first, and see if it goes well before training a high-level policy. 
- - command line python play.py --exptid gait_tuning_v2 --task b1z1 --proj_name b1z1-low --checkpoint 13000 --observe_gait_commands From bdfe35f3a40204c99b9ffd57d3596a38f7de589a Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Fri, 29 Aug 2025 14:52:14 +0200 Subject: [PATCH 10/27] update --- high-level/README.md | 3 --- high-level/data/cfg/b1z1_pickmulti.yaml | 4 ++-- high-level/envs/b1z1_pickmulti.py | 2 +- high-level/train_multistate.py | 7 +++++-- low-level/legged_gym/envs/base/legged_robot.py | 2 +- 5 files changed, 9 insertions(+), 9 deletions(-) diff --git a/high-level/README.md b/high-level/README.md index 5f58d7a..aeb3433 100644 --- a/high-level/README.md +++ b/high-level/README.md @@ -82,6 +82,3 @@ It should be a maximum of 60000 timesteps for successful student policy training [train_multistate_asym.py](./train_multistate_asym.py) is a try of using asymetric PPO for training the high-level policy (i.e, vision-based policy and privilaged value function), it is training inefficient and is not comparable to the teacher-student as it cannot parallel too many environments due to the depth images consumption. 
command: - - -python play_multistate.py --task B1Z1PickMulti --checkpoint /home/wang/Desktop/visual_wholebody/high-level/b1-pick-multi-teacher/b1-pick-multi-teacher/checkpoints/agent_50000.pt --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick --rl_device "cuda:0" --sim_device "cuda:0" diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index 6ef7be1..b955458 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -1,5 +1,5 @@ env: - numEnvs: 1024 + numEnvs: 2048 numAgents: 1 envSpacing: 5.0 enableDebugVis: False @@ -217,7 +217,7 @@ sim: solver_type: 1 num_position_iterations: 8 num_velocity_iterations: 0 - contact_offset: 0.04 + contact_offset: 0.08 rest_offset: 0.0 bounce_threshold_velocity: 0.2 max_depenetration_velocity: 50.0 diff --git a/high-level/envs/b1z1_pickmulti.py b/high-level/envs/b1z1_pickmulti.py index 4c89e55..c69fd47 100644 --- a/high-level/envs/b1z1_pickmulti.py +++ b/high-level/envs/b1z1_pickmulti.py @@ -122,7 +122,7 @@ def _create_envs(self): self.table_heights = torch.zeros(self.num_envs, device=self.device, dtype=torch.float) # table - self.table_dimz = 0.25 + self.table_dimz = 0.45 self.table_dims = gymapi.Vec3(0.6, 1.0, self.table_dimz) table_options = gymapi.AssetOptions() table_options.fix_base_link = True diff --git a/high-level/train_multistate.py b/high-level/train_multistate.py index 7763cca..e088954 100644 --- a/high-level/train_multistate.py +++ b/high-level/train_multistate.py @@ -21,7 +21,7 @@ from utils.config import load_cfg, get_params, copy_cfg import utils.wrapper as wrapper -set_seed(43) +set_seed(101) def create_env(cfg, args): cfg["env"]["enableDebugVis"] = args.debugvis @@ -36,6 +36,9 @@ def create_env(cfg, args): robot_start_pose = (-2.00, 0, 0.55) if args.eval: robot_start_pose = (-0.85, 0, 0.55) + + # python train_multistate.py --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --headless 
--task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "some descriptions" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick + _env = eval(args.task)(cfg=cfg, rl_device=args.rl_device, sim_device=args.sim_device, graphics_device_id=args.graphics_device_id, headless=args.headless, use_roboinfo=args.roboinfo, observe_gait_commands=args.observe_gait_commands, no_feature=args.no_feature, mask_arm=args.mask_arm, pitch_control=args.pitch_control, @@ -110,7 +113,7 @@ def get_trainer(is_eval=False): args = get_params() args.eval = is_eval args.wandb = args.wandb and (not args.eval) and (not args.debug) - cfg_file = "b1z1_" + args.task[4:].lower() + ".yaml" + cfg_file = "b1z1_" + args.task[4:].lower() + ".yaml" # B1Z1PickMulti -> b1z1_pickmulti.yaml file_path = "data/cfg/" + cfg_file if args.resume: diff --git a/low-level/legged_gym/envs/base/legged_robot.py b/low-level/legged_gym/envs/base/legged_robot.py index 6cf95c4..18a3351 100644 --- a/low-level/legged_gym/envs/base/legged_robot.py +++ b/low-level/legged_gym/envs/base/legged_robot.py @@ -783,7 +783,7 @@ def _get_env_origins(self): self.env_origins[:, 1] = spacing * yy.flatten()[:self.num_envs] self.env_origins[:, 2] = 0. 
- def _parse_cfg(self, cfg): + def _parse_cfg(self, cfg): # load legged_robot_config.py self.dt = self.cfg.control.decimation * self.sim_params.dt self.obs_scales = self.cfg.normalization.obs_scales self.reward_scales = class_to_dict(self.cfg.rewards.scales) From 05ac45043596356e619b06ee7a814699c935a0d1 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Sat, 30 Aug 2025 23:34:15 +0200 Subject: [PATCH 11/27] update --- high-level/train_multistate.py | 55 +++++++++++++++++++++++++++++++++- 1 file changed, 54 insertions(+), 1 deletion(-) diff --git a/high-level/train_multistate.py b/high-level/train_multistate.py index e088954..93fa4e7 100644 --- a/high-level/train_multistate.py +++ b/high-level/train_multistate.py @@ -115,6 +115,59 @@ def get_trainer(is_eval=False): args.wandb = args.wandb and (not args.eval) and (not args.debug) cfg_file = "b1z1_" + args.task[4:].lower() + ".yaml" # B1Z1PickMulti -> b1z1_pick_multi.yaml file_path = "data/cfg/" + cfg_file + + print("Arguments passed to get_trainer:") + + # Arguments passed to get_trainer: + # task: B1Z1PickMulti + # timesteps: 60000 + # control_freq: None + # rl_device: cuda:0 + # sim_device: cuda:0 + # graphics_device_id: -1 + # headless: True + # wandb: True + # wandb_project: b1-pick-multi-teacher + # wandb_name: some descriptions + # checkpoint: + # experiment_dir: b1-pick-multi-teacher + # debugvis: False + # save_image: False + # debug: False + # wrist_seg: False + # front_only: False + # seperate: False + # teacher_ckpt_path: + # resume: False + # roboinfo: True + # observe_gait_commands: True + # small_value_set_zero: True + # fixed_base: False + # use_tanh: False + # reach_only: False + # record_video: False + # last_commands: False + # no_feature: False + # mask_arm: False + # mlp_stu: False + # depth_random: False + # pitch_control: False + # pred_success: False + # near_goal_stop: False + # obj_move_prob: 0.0 + # rand_control: True + # arm_delay: False + # rand_cmd_scale: False + # rand_depth_clip: False + 
# stop_pick: True + # arm_kp: 40 + # arm_kd: 2 + # table_height: None + # seed: 43 + # eval: False + + for key, value in vars(args).items(): + print(f"{key}: {value}") if args.resume: experiment_dir = os.path.join(args.experiment_dir, args.wandb_name) @@ -133,7 +186,7 @@ def get_trainer(is_eval=False): file_path = os.path.join(experiment_dir, cfg_file) print("Find the latest checkpoint: ", args.checkpoint) - print("Using config file: ", file_path) + print("Using config file: ", file_path) # data/cfg/b1z1_pickmulti.yaml cfg = load_cfg(file_path) cfg['env']['wandb'] = args.wandb From 0c15d12d039323bc6c280d971f8f72c1ec63c413 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Sat, 30 Aug 2025 23:43:21 +0200 Subject: [PATCH 12/27] b1z1_pickmulti.yaml maxEpisodeLength: 200 #200 approaching: 5 --- high-level/data/cfg/b1z1_pickmulti.yaml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index b955458..4c043a9 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -4,7 +4,7 @@ env: envSpacing: 5.0 enableDebugVis: False isFlagrun: False - maxEpisodeLength: 150 #200 + maxEpisodeLength: 200 #200 pdControl: True powerScale: 1.0 controlFrequencyInv: 4 @@ -228,7 +228,7 @@ reward: base_height_target: 0.55 only_positive_rewards: False scales: - approaching: 0.5 + approaching: 5 lifting: 1.0 pick_up: 3.5 # 0.5 acc_penalty: -0.001 From f51af4818becf379e01a68e4dd5d595b81413022 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Sun, 31 Aug 2025 10:14:44 +0200 Subject: [PATCH 13/27] update --- high-level/README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/high-level/README.md b/high-level/README.md index aeb3433..cf881ec 100644 --- a/high-level/README.md +++ b/high-level/README.md @@ -82,3 +82,5 @@ It should be a maximum of 60000 timesteps for successful student policy training [train_multistate_asym.py](./train_multistate_asym.py) is 
a try of using asymetric PPO for training the high-level policy (i.e, vision-based policy and privilaged value function), it is training inefficient and is not comparable to the teacher-student as it cannot parallel too many environments due to the depth images consumption. command: + +python train_multistate.py --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "some descriptions" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick From 5a6e2c834bf8356404651279349d6d84362e0aab Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Sun, 31 Aug 2025 15:59:12 +0200 Subject: [PATCH 14/27] update --- high-level/README.md | 4 +++- high-level/data/cfg/b1z1_pickmulti.yaml | 4 ++-- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/high-level/README.md b/high-level/README.md index cf881ec..3f8fac7 100644 --- a/high-level/README.md +++ b/high-level/README.md @@ -83,4 +83,6 @@ It should be a maximum of 60000 timesteps for successful student policy training command: -python train_multistate.py --rl_device "cuda:0" --sim_device "cuda:0" --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "some descriptions" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick +python train_multistate.py --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "policy2" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick + +python play_multistate.py --task B1Z1PickMulti --checkpoint /home/wang/Desktop/visual_wholebody/high-level/b1-pick-multi-teacher/policy_2/checkpoints/agent_30001.pt --roboinfo --observe_gait_commands --small_value_set_zero --rand_control 
--stop_pick diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index 4c043a9..949dba7 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -228,9 +228,9 @@ reward: base_height_target: 0.55 only_positive_rewards: False scales: - approaching: 5 + approaching: 5.0 lifting: 1.0 - pick_up: 3.5 # 0.5 + pick_up: 5.0 # 0.5 acc_penalty: -0.001 command_penalty: -1.0 command_reward: 0.25 From b03992027740837c45e916d83d7617550df6f334 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Wed, 3 Sep 2025 15:47:53 +0200 Subject: [PATCH 15/27] update --- high-level/data/cfg/b1z1_pickmulti.yaml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index 949dba7..026d5de 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -234,15 +234,15 @@ reward: acc_penalty: -0.001 command_penalty: -1.0 command_reward: 0.25 - standpick: 0.25 + standpick: 0.25 # no found in reward_vec_task.py action_rate: -0.001 ee_orn: 0.01 base_dir: 0.25 rad_penalty: 0.0 base_ang_pen: 0.0 base_approaching: 0.01 # 0.05 - grasp_base_height: 0.5 - gripper_rate: 0.0 # -0.1 + grasp_base_height: 0.5 # no found in reward_vec_task.py + gripper_rate: -0.1 # -0.1 sensor: depth_clip_lower: 0.15 From b9759affc078a69d9e53bcd6239afd136d2a6d8c Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Wed, 3 Sep 2025 15:59:27 +0200 Subject: [PATCH 16/27] update reward reward: base_height_target: 0.55 only_positive_rewards: False scales: approaching: 5.0 lifting: 1.0 pick_up: 5.0 # 0.5 acc_penalty: -0.001 command_penalty: -1.0 command_reward: 0.25 standpick: 0.25 # no found in reward_vec_task.py action_rate: -0.001 ee_orn: 0.05 # 0.01 base_dir: 0.25 rad_penalty: 0.0 base_ang_pen: 0.0 base_approaching: 0.01 # 0.05 grasp_base_height: 0.5 # no found in reward_vec_task.py gripper_rate: -0.1 # -0.1 --- 
high-level/data/cfg/b1z1_pickmulti.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml
index 026d5de..0f7b294 100644
--- a/high-level/data/cfg/b1z1_pickmulti.yaml
+++ b/high-level/data/cfg/b1z1_pickmulti.yaml
@@ -236,7 +236,7 @@ reward:
     command_reward: 0.25
     standpick: 0.25 # no found in reward_vec_task.py
     action_rate: -0.001
-    ee_orn: 0.01
+    ee_orn: 0.05 # 0.01
     base_dir: 0.25
     rad_penalty: 0.0
     base_ang_pen: 0.0

From 7b38f160462c61c838f57ded3f99b1d521f1741b Mon Sep 17 00:00:00 2001
From: Weijie Wang
Date: Wed, 3 Sep 2025 16:01:50 +0200
Subject: [PATCH 17/27] update policy 4

---
 low-level/README.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/low-level/README.md b/low-level/README.md
index b33f132..c3a5dd8 100644
--- a/low-level/README.md
+++ b/low-level/README.md
@@ -39,4 +39,8 @@ To choose a good low-level policy that can be further used for training the high
 
 command line
 
-python play.py --exptid gait_tuning_v2 --task b1z1 --proj_name b1z1-low --checkpoint 13000 --observe_gait_commands
+python play_multistate.py --task B1Z1PickMulti --checkpoint /home/wang/Desktop/visual_wholebody/high-level/b1-pick-multi-teacher/policy_2/checkpoints/agent_30001.pt --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick
+
+
+python train_multistate.py --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "policy_4" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick

From 1bf299dabc2113ffea6a4ab168a030737c21c73c Mon Sep 17 00:00:00 2001
From: Weijie Wang
Date: Wed, 3 Sep 2025 15:59:27 +0200
Subject: [PATCH 18/27] update reward eeorn

reward:
  base_height_target: 0.55
  only_positive_rewards: False
  scales:
    approaching: 5.0
    lifting: 1.0
    pick_up: 5.0 # 0.5
    acc_penalty: -0.001
    command_penalty: -1.0
    command_reward: 0.25
    standpick: 0.25 # no found in reward_vec_task.py
    action_rate: -0.001
    ee_orn: 0.05 # 0.01
    base_dir: 0.25
    rad_penalty: 0.0
    base_ang_pen: 0.0
    base_approaching: 0.01 # 0.05
    grasp_base_height: 0.5 # no found in reward_vec_task.py
    gripper_rate: -0.1 # -0.1

---
 high-level/data/cfg/b1z1_pickmulti.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml
index 026d5de..0f7b294 100644
--- a/high-level/data/cfg/b1z1_pickmulti.yaml
+++ b/high-level/data/cfg/b1z1_pickmulti.yaml
@@ -236,7 +236,7 @@ reward:
     command_reward: 0.25
     standpick: 0.25 # no found in reward_vec_task.py
     action_rate: -0.001
-    ee_orn: 0.01
+    ee_orn: 0.05 # 0.01
     base_dir: 0.25
     rad_penalty: 0.0
     base_ang_pen: 0.0

From 716a2e18bf874026ca985d5bb9c2dbc9d1dcf8e0 Mon Sep 17 00:00:00 2001
From: Weijie Wang
Date: Wed, 3 Sep 2025 16:01:50 +0200
Subject: [PATCH 19/27] update policy 4

---
 low-level/README.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/low-level/README.md b/low-level/README.md
index b33f132..c3a5dd8 100644
--- a/low-level/README.md
+++ b/low-level/README.md
@@ -39,4 +39,8 @@ To choose a good low-level policy that can be further used for training the high
 
 command line
 
-python play.py --exptid gait_tuning_v2 --task b1z1 --proj_name b1z1-low --checkpoint 13000 --observe_gait_commands
+python play_multistate.py --task B1Z1PickMulti --checkpoint /home/wang/Desktop/visual_wholebody/high-level/b1-pick-multi-teacher/policy_2/checkpoints/agent_30001.pt --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick
+
+
+python train_multistate.py --timesteps 60000 --headless --task B1Z1PickMulti --experiment_dir b1-pick-multi-teacher --wandb --wandb_project "b1-pick-multi-teacher" --wandb_name "policy_4" --roboinfo --observe_gait_commands --small_value_set_zero --rand_control --stop_pick

From c6892ab1cacf1ca4b7ba015325e9e08751b9e307 Mon Sep 17 00:00:00
2001 From: Weijie Wang Date: Thu, 4 Sep 2025 11:56:36 +0200 Subject: [PATCH 20/27] policy 5: scales->approaching = 10.0 reward: base_height_target: 0.55 only_positive_rewards: False scales: approaching: 10.0 lifting: 1.0 pick_up: 5.0 # 0.5 acc_penalty: -0.001 command_penalty: -1.0 command_reward: 0.25 standpick: 0.25 # no found in reward_vec_task.py action_rate: -0.001 ee_orn: 0.05 # 0.01 base_dir: 0.25 rad_penalty: 0.0 base_ang_pen: 0.0 base_approaching: 0.01 # 0.05 grasp_base_height: 0.5 # no found in reward_vec_task.py gripper_rate: -0.1 # -0.1 --- high-level/data/cfg/b1z1_pickmulti.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index 0f7b294..aa9811e 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -228,7 +228,7 @@ reward: base_height_target: 0.55 only_positive_rewards: False scales: - approaching: 5.0 + approaching: 10.0 lifting: 1.0 pick_up: 5.0 # 0.5 acc_penalty: -0.001 From 24553bf4113bab9fbd2f421af0eec3f9d0ceecc4 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Thu, 4 Sep 2025 11:58:27 +0200 Subject: [PATCH 21/27] syn --- high-level/data/cfg/b1z1_pickmulti.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index aa9811e..f781c3e 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -234,7 +234,7 @@ reward: acc_penalty: -0.001 command_penalty: -1.0 command_reward: 0.25 - standpick: 0.25 # no found in reward_vec_task.py + standpick: 0.25 # no found in reward_vec_task.py # action_rate: -0.001 ee_orn: 0.05 # 0.01 base_dir: 0.25 From 17dbf569022fecc18f90340901d301281c6a08d7 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Thu, 4 Sep 2025 17:39:21 +0200 Subject: [PATCH 22/27] update parameter ee_orn: 0.1 reward: base_height_target: 0.55 only_positive_rewards: 
False scales: approaching: 5.0 lifting: 1.0 pick_up: 5.0 # 0.5 acc_penalty: -0.001 command_penalty: -1.0 command_reward: 0.25 standpick: 0.25 # no found in reward_vec_task.py # action_rate: -0.001 ee_orn: 0.1 # 0.01 base_dir: 0.25 rad_penalty: 0.0 base_ang_pen: 0.0 base_approaching: 0.01 # 0.05 grasp_base_height: 0.5 # no found in reward_vec_task.py gripper_rate: -0.1 # -0.1 --- high-level/data/cfg/b1z1_pickmulti.yaml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index f781c3e..fef3123 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ b/high-level/data/cfg/b1z1_pickmulti.yaml @@ -228,7 +228,7 @@ reward: base_height_target: 0.55 only_positive_rewards: False scales: - approaching: 10.0 + approaching: 5.0 lifting: 1.0 pick_up: 5.0 # 0.5 acc_penalty: -0.001 @@ -236,7 +236,7 @@ reward: command_reward: 0.25 standpick: 0.25 # no found in reward_vec_task.py # action_rate: -0.001 - ee_orn: 0.05 # 0.01 + ee_orn: 0.1 # 0.01 base_dir: 0.25 rad_penalty: 0.0 base_ang_pen: 0.0 From d1a7b9e93fde17091fe4c2a857721632408cabb5 Mon Sep 17 00:00:00 2001 From: Weijie Wang Date: Thu, 4 Sep 2025 17:39:21 +0200 Subject: [PATCH 23/27] update parameter ee_orn: 0.1 policy 5 again reward: base_height_target: 0.55 only_positive_rewards: False scales: approaching: 5.0 lifting: 1.0 pick_up: 5.0 # 0.5 acc_penalty: -0.001 command_penalty: -1.0 command_reward: 0.25 standpick: 0.25 # no found in reward_vec_task.py # action_rate: -0.001 ee_orn: 0.1 # 0.01 base_dir: 0.25 rad_penalty: 0.0 base_ang_pen: 0.0 base_approaching: 0.01 # 0.05 grasp_base_height: 0.5 # no found in reward_vec_task.py gripper_rate: -0.1 # -0.1 --- high-level/data/cfg/b1z1_pickmulti.yaml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/high-level/data/cfg/b1z1_pickmulti.yaml b/high-level/data/cfg/b1z1_pickmulti.yaml index f781c3e..fef3123 100644 --- a/high-level/data/cfg/b1z1_pickmulti.yaml +++ 
b/high-level/data/cfg/b1z1_pickmulti.yaml
@@ -228,7 +228,7 @@ reward:
   base_height_target: 0.55
   only_positive_rewards: False
   scales:
-    approaching: 10.0
+    approaching: 5.0
     lifting: 1.0
     pick_up: 5.0 # 0.5
     acc_penalty: -0.001
@@ -236,7 +236,7 @@ reward:
     command_reward: 0.25
     standpick: 0.25 # no found in reward_vec_task.py
     # action_rate: -0.001
-    ee_orn: 0.05 # 0.01
+    ee_orn: 0.1 # 0.01
     base_dir: 0.25
     rad_penalty: 0.0
     base_ang_pen: 0.0

From c8731d5c77cfa918f4628a99ba258c90c47f71d9 Mon Sep 17 00:00:00 2001
From: Weijie Wang
Date: Fri, 5 Sep 2025 16:50:14 +0200
Subject: [PATCH 24/27] Create high-level-code.txt

---
 high-level-code.txt | 48 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100644 high-level-code.txt

diff --git a/high-level-code.txt b/high-level-code.txt
new file mode 100644
index 0000000..1997940
--- /dev/null
+++ b/high-level-code.txt
@@ -0,0 +1,48 @@
+Visual Whole-Body Control for Legged Loco-Manipulation
+
+├── b1-pick-multi-teacher == policy
+├── data == robot model; object_model (.obj, .npy point-cloud feature data, .urdf.visual.py); cfg: configuration files
+├── envs == b1z1_pickmulti.py
+├── experiments
+├── learning
+├── modules
+├── play_multi_bc_deter.py
+├── play_multistate.py
+├── __pycache__
+├── README.md
+├── test_pointcloud.py
+├── train_multi_bc_deter.py
+├── train_multistate_asym.py
+├── train_multistate.py
+├── utils
+└── wandb
+
+Reward
+_reward_base_height: Measures the difference between the current height of the robot's root and a given target height, and computes a reward from that difference; the closer to the target, the larger the reward, with the reward decaying exponentially.
+
+_reward_approaching: Rewards getting closer to the cube than before; the reward scales with how much closer the robot got, but is bounded. If the robot is too far away, no reward is given.
+_reward_lifting: Rewards lifting the object higher than before; the reward scales with the height gained, but is bounded. Once the object has been lifted, no further reward is given.
+_reward_pick_up: Gives the robot an explicit success reward, and uses a "holding counter" mechanism to guarantee that a claimed "successful pick" is a stable grasp, guiding the robot toward a robust picking policy and ultimately the high grasp success rate and flexible retrying reported in the paper.
+_reward_acc_penalty: Measures the rate of change of velocity (i.e., acceleration) in the arm's motion and penalizes high-acceleration actions, promoting smooth and stable arm movement.
+
+_reward_command_reward: Grants a reward only when the robot is close to the object (within 0.6 m). The reward value is the absolute
+
+_reward_command_penalty: value of the first command component the robot executes (usually a linear-velocity or displacement command), negated and passed through an exponential, so the smaller the command (the closer to 0), the larger the reward, encouraging the robot to keep its command inputs small and steady near the object.
+This function penalizes the robot for applying overly large action commands while approaching the object, promoting smoother and more restrained motion.
+
+_reward_action_rate: The larger the change in actions, the larger the reward; in particular, the larger the change in the linear-velocity and yaw-rate components of the action, the larger the reward.
+
+_reward_ee_orn: Computes how well the orientation of the robot's "hand" is aligned with the target object, rewarding the arm for pointing toward the target: the better the alignment, the higher the reward.
+
+_reward_base_dir: Computes the directional agreement between the heading of the robot's base and the direction of the target object relative to it, and returns the corresponding reward. A higher reward means the robot is facing the target object more directly, which is critical for reorienting the base toward the object to grasp it.
+
+_reward_rad_penalty: A reward design that applies a soft constraint on distance to keep the end-effector position around an ideal radius, neither too close nor too far, so that arm motions stay both effective and safe.
+
+_reward_base_ang_pen: Computes a reward tied to the base's angular velocity, penalizing base rotation speed and encouraging the robot to stay stable.
+
+_reward_base_approaching: Computes how close the robot's base is to the target object in the plane and rewards that proximity; the closer the distance is to a preset target distance, the higher the reward.
+
+: This function measures and rewards the rate of change of the gripper action, encouraging the robot to adjust the gripper's opening and closing flexibly during manipulation, improving control precision and grasp success. The reward is disabled early in training to keep training stable and prevent premature gripper-action changes from causing unwanted effects.
+
+_reward_standpick: If the robot is close enough to the target object and its movement speed is within a set limit, it receives a reward of 1. This encourages approaching the target (raising the chance of success) and keeping a reasonable movement speed (robust motion), while withholding the reward early in training so the robot first accumulates experience.
+_reward_grasp_base_height: Rewards the robot's ability to lift the object. It builds on the more basic reward _reward_base_height, presumably measuring the lifted height.

From a80996ce68572bfda766f22b315f7713388c788f Mon Sep 17 00:00:00 2001
From: Weijie Wang
Date: Fri, 5 Sep 2025 18:09:06 +0200
Subject: [PATCH 25/27] Update high-level-code.txt

---
 high-level-code.txt | 29 -----------------------------
 1 file changed, 29 deletions(-)

diff --git a/high-level-code.txt b/high-level-code.txt
index 1997940..3d049ba 100644
--- a/high-level-code.txt
+++ b/high-level-code.txt
@@ -17,32 +17,3 @@ Visual Whole-Body Control for Legged Loco-Manipulation
 ├── utils
 └── wandb
 
-Reward
-_reward_base_height: Measures the difference between the current height of the robot's root and a given target height, and computes a reward from that difference; the closer to the target, the larger the reward, with the reward decaying exponentially.
-
-_reward_approaching: Rewards getting closer to the cube than before; the reward scales with how much closer the robot got, but is bounded. If the robot is too far away, no reward is given.
-_reward_lifting: Rewards lifting the object higher than before; the reward scales with the height gained, but is bounded. Once the object has been lifted, no further reward is given.
-_reward_pick_up: Gives the robot an explicit success reward, and uses a "holding counter" mechanism to guarantee that a claimed "successful pick" is a stable grasp, guiding the robot toward a robust picking policy and ultimately the high grasp success rate and flexible retrying reported in the paper.
-_reward_acc_penalty: Measures the rate of change of velocity (i.e., acceleration) in the arm's motion and penalizes high-acceleration actions, promoting smooth and stable arm movement.
-
-_reward_command_reward: Grants a reward only when the robot is close to the object (within 0.6 m). The reward value is the absolute
-
-_reward_command_penalty: value of the first command component the robot executes (usually a linear-velocity or displacement command), negated and passed through an exponential, so the smaller the command (the closer to 0), the larger the reward, encouraging the robot to keep its command inputs small and steady near the object.
-This function penalizes the robot for applying overly large action commands while approaching the object, promoting smoother and more restrained motion.
-
-_reward_action_rate: The larger the change in actions, the larger the reward; in particular, the larger the change in the linear-velocity and yaw-rate components of the action, the larger the reward.
-
-_reward_ee_orn: Computes how well the orientation of the robot's "hand" is aligned with the target object, rewarding the arm for pointing toward the target: the better the alignment, the higher the reward.
-
-_reward_base_dir: Computes the directional agreement between the heading of the robot's base and the direction of the target object relative to it, and returns the corresponding reward. A higher reward means the robot is facing the target object more directly, which is critical for reorienting the base toward the object to grasp it.
-
-_reward_rad_penalty: A reward design that applies a soft constraint on distance to keep the end-effector position around an ideal radius, neither too close nor too far, so that arm motions stay both effective and safe.
-
-_reward_base_ang_pen: Computes a reward tied to the base's angular velocity, penalizing base rotation speed and encouraging the robot to stay stable.
-
-_reward_base_approaching: Computes how close the robot's base is to the target object in the plane and rewards that proximity; the closer the distance is to a preset target distance, the higher the reward.
-
-: This function measures and rewards the rate of change of the gripper action, encouraging the robot to adjust the gripper's opening and closing flexibly during manipulation, improving control precision and grasp success. The reward is disabled early in training to keep training stable and prevent premature gripper-action changes from causing unwanted effects.
-
-_reward_standpick: If the robot is close enough to the target object and its movement speed is within a set limit, it receives a reward of 1. This encourages approaching the target (raising the chance of success) and keeping a reasonable movement speed (robust motion), while withholding the reward early in training so the robot first accumulates experience.
-_reward_grasp_base_height: Rewards the robot's ability to lift the object. It builds on the more basic reward _reward_base_height, presumably measuring the lifted height.

From d2d8ae9c34d8ff6c370e57bec71da769882bbe8d Mon Sep 17 00:00:00 2001
From: Weijie Wang
Date: Fri, 5 Sep 2025 18:09:56 +0200
Subject: [PATCH 26/27] Update high-level-code.txt

---
 high-level-code.txt | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/high-level-code.txt b/high-level-code.txt
index 3d049ba..1997940 100644
--- a/high-level-code.txt
+++ b/high-level-code.txt
@@ -17,3 +17,32 @@ Visual Whole-Body Control for Legged Loco-Manipulation
 ├── utils
 └── wandb
 
+Reward
+_reward_base_height: Measures the difference between the current height of the robot's root and a given target height, and computes a reward from that difference; the closer to the target, the larger the reward, with the reward decaying exponentially.
+
+_reward_approaching: Rewards getting closer to the cube than before; the reward scales with how much closer the robot got, but is bounded. If the robot is too far away, no reward is given.
+_reward_lifting: Rewards lifting the object higher than before; the reward scales with the height gained, but is bounded. Once the object has been lifted, no further reward is given.
+_reward_pick_up: Gives the robot an explicit success reward, and uses a "holding counter" mechanism to guarantee that a claimed "successful pick" is a stable grasp, guiding the robot toward a robust picking policy and ultimately the high grasp success rate and flexible retrying reported in the paper.
+_reward_acc_penalty: Measures the rate of change of velocity (i.e., acceleration) in the arm's motion and penalizes high-acceleration actions, promoting smooth and stable arm movement.
+
+_reward_command_reward: Grants a reward only when the robot is close to the object (within 0.6 m). The reward value is the absolute
+
+_reward_command_penalty: value of the first command component the robot executes (usually a linear-velocity or displacement command), negated and passed through an exponential, so the smaller the command (the closer to 0), the larger the reward, encouraging the robot to keep its command inputs small and steady near the object.
+This function penalizes the robot for applying overly large action commands while approaching the object, promoting smoother and more restrained motion.
+
+_reward_action_rate: The larger the change in actions, the larger the reward; in particular, the larger the change in the linear-velocity and yaw-rate components of the action, the larger the reward.
+
+_reward_ee_orn: Computes how well the orientation of the robot's "hand" is aligned with the target object, rewarding the arm for pointing toward the target: the better the alignment, the higher the reward.
+
+_reward_base_dir: Computes the directional agreement between the heading of the robot's base and the direction of the target object relative to it, and returns the corresponding reward. A higher reward means the robot is facing the target object more directly, which is critical for reorienting the base toward the object to grasp it.
+
+_reward_rad_penalty: A reward design that applies a soft constraint on distance to keep the end-effector position around an ideal radius, neither too close nor too far, so that arm motions stay both effective and safe.
+
+_reward_base_ang_pen: Computes a reward tied to the base's angular velocity, penalizing base rotation speed and encouraging the robot to stay stable.
+
+_reward_base_approaching: Computes how close the robot's base is to the target object in the plane and rewards that proximity; the closer the distance is to a preset target distance, the higher the reward.
+
+: This function measures and rewards the rate of change of the gripper action, encouraging the robot to adjust the gripper's opening and closing flexibly during manipulation, improving control precision and grasp success. The reward is disabled early in training to keep training stable and prevent premature gripper-action changes from causing unwanted effects.
+
+_reward_standpick: If the robot is close enough to the target object and its movement speed is within a set limit, it receives a reward of 1. This encourages approaching the target (raising the chance of success) and keeping a reasonable movement speed (robust motion), while withholding the reward early in training so the robot first accumulates experience.
+_reward_grasp_base_height: Rewards the robot's ability to lift the object. It builds on the more basic reward _reward_base_height, presumably measuring the lifted height.

From b0de1b575c122c67b49693480ba2cc4083ab6f6c Mon Sep 17 00:00:00 2001
From: Weijie Wang
Date: Fri, 5 Sep 2025 18:22:57 +0200
Subject: [PATCH 27/27] Update high-level-code.txt
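The reward notes in high-level-code.txt all follow one pattern: each `_reward_*` function returns a raw per-step value, and the `reward.scales` section of `b1z1_pickmulti.yaml` (the values these patches keep tuning, e.g. `approaching`, `pick_up`, `ee_orn`) weights those raw values into the total. A minimal sketch of that pattern, with an exponentially decaying height-tracking term like the one described for `_reward_base_height` (function names, the `sigma` width, and the scalar shapes here are illustrative assumptions, not the repo's actual API):

```python
import math

def reward_base_height(base_height: float, target: float = 0.55, sigma: float = 20.0) -> float:
    """Exponentially decaying tracking reward: 1.0 exactly at the target
    height, falling off with the squared distance from it (sigma is an
    assumed width, not a value from the config)."""
    return math.exp(-sigma * (base_height - target) ** 2)

def total_reward(raw_terms: dict, scales: dict) -> float:
    """Weighted sum of raw reward terms, mirroring how a `reward.scales`
    config section would combine them; unknown terms get weight 0."""
    return sum(scales.get(name, 0.0) * value for name, value in raw_terms.items())

# Weights taken from the config as of PATCH 22; raw term values are made up.
scales = {"approaching": 5.0, "lifting": 1.0, "pick_up": 5.0, "ee_orn": 0.1}
raw = {"approaching": 0.3, "lifting": 0.0, "pick_up": 1.0, "ee_orn": 0.8}
r = total_reward(raw, scales)  # 5.0*0.3 + 1.0*0.0 + 5.0*1.0 + 0.1*0.8 = 6.58
```

This also illustrates why the patches above only touch the YAML: rebalancing terms such as `approaching` vs. `pick_up` changes the weighting without modifying any `_reward_*` function.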