The Humanoid Intelligence Team from FudanNLP and OpenMOSS
For more information, please refer to our project page and technical report.
Humanoid robots are capable of performing various actions such as greeting, dancing, and even backflipping. However, these motions are often hard-coded or specifically trained, which limits their versatility. In this work, we present FRoM-W1[^1], an open-source framework designed to achieve general humanoid whole-body motion control using natural language.
To understand natural language instructions and generate the corresponding motions, and to enable various humanoid robots to execute these motions stably in the physical world under gravity, FRoM-W1 operates in two stages:
(a) H-GPT
Leveraging massive human motion data, we train a large-scale language-driven human whole-body motion generation model that produces diverse, natural behaviors. We further leverage the Chain-of-Thought technique to improve the model's generalization in instruction understanding.
(b) H-ACT
After the generated human whole-body motions are retargeted into robot-specific actions, a motion controller, pretrained and then fine-tuned through reinforcement learning in physical simulation, enables humanoid robots to perform the corresponding actions accurately and stably. The controller is then deployed on real robots via a modular sim-to-real pipeline.
We extensively evaluate FRoM-W1 on Unitree H1 and G1 robots. Results demonstrate superior performance on the HumanML3D-X benchmark for human whole-body motion generation, and our introduced reinforcement learning fine-tuning consistently improves both motion tracking accuracy and task success rates of these humanoid robots. We open-source the entire FRoM-W1 framework and hope it will advance the development of humanoid intelligence.
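At a glance, the two stages compose into a single text-to-deployment pipeline: language goes in, H-GPT produces a human motion, the motion is retargeted to the robot, and H-ACT tracks it. The sketch below is a conceptual illustration only, with placeholder types and function names (MotionClip, h_gpt, retarget_to_robot, h_act_track); it is not the repository's API.

```python
# Conceptual sketch of the FRoM-W1 two-stage flow. All names and data layouts here are
# placeholders for illustration only; see the H-GPT and H-ACT subfolders for the real code.
from dataclasses import dataclass

@dataclass
class MotionClip:
    """A whole-body motion as per-frame joint targets (placeholder representation)."""
    joint_names: list[str]
    frames: list[list[float]]  # shape [T, num_joints]

def h_gpt(instruction: str) -> MotionClip:
    """Stage (a): natural language -> human whole-body motion (stub)."""
    return MotionClip(joint_names=["pelvis", "left_knee", "right_knee"],
                      frames=[[0.0, 0.4, 0.4]])

def retarget_to_robot(human_motion: MotionClip, robot: str) -> MotionClip:
    """Map human (SMPL-X-style) joints onto robot-specific joints (stub)."""
    return MotionClip(joint_names=[f"{robot}/{j}" for j in human_motion.joint_names],
                      frames=human_motion.frames)

def h_act_track(robot_motion: MotionClip) -> None:
    """Stage (b): an RL-trained controller tracks the reference motion (stub)."""
    for frame in robot_motion.frames:
        pass  # in the real system, joint targets are sent to the simulator / real robot

if __name__ == "__main__":
    motion = h_gpt("wave with the right hand, then bow")
    h_act_track(retarget_to_robot(motion, robot="unitree_g1"))
```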
- 🎉 Release the initial codebase for the H-GPT and H-ACT modules
- 🎉 Release the amazing humanoid-robot deployment framework RoboJuDo
- Release the CoT datasets of the HumanML3D-X and Motion-X benchmarks, and the δHumanML3D-X benchmark
- Release checkpoints for the baseline models, SMPL-X version of T2M, MotionDiffuse, MLD, T2M-GPT
- 🎉 Release the Technical Report and Project Page of FRoM-W1!
- More powerful models are work in progress
Due to license restrictions, we cannot publicly share all of the data. Here are the reference download and processing links for the relevant datasets:
H-GPT Module
| Dataset Name | Download Guide |
|---|---|
| HumanML3D-X | Please refer to the process in the Motion-X repo to download and process the corresponding AMASS data. The CoT part can be downloaded here. |
| δHumanML3D-X | After obtaining the HumanML3D-X data, replace the textual instructions in it with the perturbed versions provided here. |
| Motion-X | Please refer to the original Motion-X repo. Note that we did not use the Motion-X++ version; specifically, we used the version from [2024.2.6]. |
H-ACT Module
| Dataset Name | Download Guide |
|---|---|
| AMASS | Please refer to the download and processing procedures for the AMASS dataset in the human2humanoid project. |
| AMASS-H1 | The retargeted dataset for the Unitree H1 can be obtained from the link provided by human2humanoid. |
| AMASS-G1 | We provide a retargeted dataset for the Unitree G1, with the link available here. |
To keep the repo organized, we provide a subset of core model checkpoints below:
H-GPT Module
| Model Name | Download Guide |
|---|---|
| Eval Model | HuggingFace link; the evaluation models were trained following the T2M pipeline with the SMPL-X format. |
| Baseline Models | HuggingFace link, including the SMPL-X versions of the T2M, MotionDiffuse, MLD, and T2M-GPT models. |
| H-GPT w.o. CoT | HuggingFace link; you can refer to this script to merge the LoRA parameters with the original Llama-3.1 model. |
| H-GPT | HuggingFace link; you can refer to this script to merge the LoRA parameters with the original Llama-3.1 model. |
| H-GPT++ w.o. CoT | HuggingFace link; you can refer to this script to merge the LoRA parameters with the original Llama-3.1 model. |
| H-GPT++ | HuggingFace link; you can refer to this script to merge the LoRA parameters with the original Llama-3.1 model (a merge sketch is also shown below this table). |
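If you would rather not use the provided merge script, the same merge can typically be done with the standard PEFT workflow. Below is a minimal sketch; the base-model ID and all paths are assumptions, so follow the repository's script and configs for the exact setup.

```python
# Minimal sketch of merging LoRA adapters into a base Llama-3.1 model with PEFT.
# The base-model ID and paths are placeholders; the repo's own merge script may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"   # assumed base model; check the repo's config
adapter_dir = "path/to/h-gpt-lora"             # downloaded LoRA checkpoint (placeholder)
output_dir = "path/to/h-gpt-merged"            # where to save the merged weights

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_dir)
merged = model.merge_and_unload()              # folds the LoRA deltas into the base weights

merged.save_pretrained(output_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(output_dir)
```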
H-ACT Module
| Model Name | Download Guide |
|---|---|
| H1-Full | Teacher Policy, Student Policy |
| H1-Clean | Teacher Policy, Student Policy |
| G1-Full | Teacher Policy, Student Policy |
| G1-Clean | Teacher Policy, Student Policy |
If you require additional model checkpoints, please contact us.
conda create -n fromw1 python=3.10
conda activate fromw1
pip install -r ./H-GPT/requirements_deploy.txt
pip install -r ./H-ACT/retarget/requirements.txt
H-GPT
- Download the H-GPT whole-body motion tokenizer and the motion generator from HuggingFace.
- Replace the paths to the motion tokenizer and the motion generator at lines 55 & 78 of ./H-GPT/hGPT/configs/config_deployment_cot.yaml.
- Run bash ./H-GPT/app.sh to deploy the H-GPT model as a gradio app and generate human motions (see the sketch below for calling the deployed app programmatically).
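Once the gradio app is up, it can also be queried from Python instead of the browser UI. The sketch below assumes the default local gradio port; the exact endpoint name and argument layout depend on how app.sh builds the interface, so treat the commented call as a placeholder and check client.view_api() (or the app's "Use via API" panel) first.

```python
# Query the locally deployed H-GPT gradio app from Python.
# Port, endpoint name, and argument layout below are assumptions, not the repo's documented API.
from gradio_client import Client

client = Client("http://127.0.0.1:7860")  # default gradio port; adjust if app.sh uses another
client.view_api()                         # prints the available endpoints and their signatures

# Placeholder call pattern once the endpoint is known:
# result = client.predict("a person waves with both hands", api_name="/predict")
# print(result)
```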
H-ACT
- Download the SMPL and MANO models and organize them according to the H-ACT README file.
- Run python ./H-ACT/retarget/main.py to retarget the generated human motions into humanoid-robot-specific joint sequences (a toy illustration of the retargeting idea follows this list).
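For intuition only: retargeting roughly means mapping human joints onto the robot's kinematic structure while respecting the robot's joint limits. The toy sketch below uses made-up joint names and limits; it is not the implementation in ./H-ACT/retarget, which is considerably more involved.

```python
# Toy illustration of motion retargeting (NOT the repo's implementation): copy matching
# joint angles from a human (SMPL-X-style) pose onto a robot and clamp them to joint limits.
# All joint names and limits below are made up for illustration.
import numpy as np

HUMAN_TO_ROBOT = {          # hypothetical name mapping
    "left_knee": "left_knee_joint",
    "right_knee": "right_knee_joint",
    "left_elbow": "left_elbow_joint",
}
ROBOT_LIMITS = {            # hypothetical joint limits in radians
    "left_knee_joint": (0.0, 2.3),
    "right_knee_joint": (0.0, 2.3),
    "left_elbow_joint": (-1.5, 1.5),
}

def retarget_frame(human_angles: dict[str, float]) -> dict[str, float]:
    """Map one frame of human joint angles onto robot joints, respecting limits."""
    robot_frame = {}
    for human_joint, robot_joint in HUMAN_TO_ROBOT.items():
        lo, hi = ROBOT_LIMITS[robot_joint]
        robot_frame[robot_joint] = float(np.clip(human_angles.get(human_joint, 0.0), lo, hi))
    return robot_frame

if __name__ == "__main__":
    frame = {"left_knee": 0.8, "right_knee": 2.9, "left_elbow": -0.4}
    print(retarget_frame(frame))  # the right knee gets clamped to the robot's limit
```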
After obtaining the retargeted robot motion sequences, you can conveniently use our RoboJudo repo to track them with various policies in both simulation and real-world scenarios.
Please refer to the corresponding H-GPT README file in the subfolder.
Please refer to the corresponding H-ACT README file in the subfolder.
We extend our gratitude to Biao Jiang for discussions and assistance regarding the motion generation models, and to Tairan He and Ziwen Zhuang for their discussions and help with motion tracking.
We also thank all the relevant open-source datasets and codebases; it is these open-source projects that have propelled the advancement of the entire field!
If you find our work useful, please cite it in the following way:
@misc{li2026fromw1generalhumanoidwholebody,
title={FRoM-W1: Towards General Humanoid Whole-Body Control with Language Instructions},
author={Peng Li and Zihan Zhuang and Yangfan Gao and Yi Dong and Sixian Li and Changhao Jiang and Shihan Dou and Zhiheng Xi and Enyu Zhou and Jixuan Huang and Hui Li and Jingjing Gong and Xingjun Ma and Tao Gui and Zuxuan Wu and Qi Zhang and Xuanjing Huang and Yu-Gang Jiang and Xipeng Qiu},
year={2026},
eprint={2601.12799},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2601.12799},
}

Welcome to star ⭐ our GitHub Repo, raise issues, and submit PRs!
[^1]: Foundational Humanoid Robot Model - Whole-Body Control, Version 1
