
FRoM-W1: Towards General Humanoid Whole-Body Control with Language Instructions


The Humanoid Intelligence Team from FudanNLP and OpenMOSS

Project Webpage | Paper on arXiv | GitHub Code | Hugging Face Data | Hugging Face Model | License

🌟 Introduction


For more information, please refer to our project page and technical report.

Humanoid robots are capable of performing various actions such as greeting, dancing, and even backflipping. However, these motions are often hard-coded or specifically trained, which limits their versatility. In this work, we present FRoM-W1¹, an open-source framework designed to achieve general humanoid whole-body motion control using natural language.

To understand natural language universally and generate corresponding motions, and to enable various humanoid robots to execute these motions stably in the physical world under gravity, FRoM-W1 operates in two stages:

(a) H-GPT
Using large-scale human motion data, we train a language-driven human whole-body motion generation model that produces diverse, natural behaviors. We further leverage the Chain-of-Thought technique to improve the model's generalization in instruction understanding.

(b) H-ACT
After retargeting the generated human whole-body motions into robot-specific actions, a motion controller, pretrained and then fine-tuned through reinforcement learning in physical simulation, enables humanoid robots to perform the corresponding actions accurately and stably. The controller is then deployed on real robots via a modular sim-to-real pipeline.
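
Conceptually, the two stages compose into a single text-to-robot pipeline. The sketch below is purely illustrative: every function is a placeholder standing in for code that actually lives in the H-GPT and H-ACT subfolders, not the repository's real API.

# Conceptual sketch of the FRoM-W1 two-stage pipeline. All function
# bodies are placeholders; the real implementations live in the
# H-GPT and H-ACT subfolders of this repository.

def hgpt_generate(instruction: str, use_cot: bool = True) -> list:
    """Stage (a): language -> human whole-body motion."""
    raise NotImplementedError("see ./H-GPT")

def retarget(human_motion: list, robot: str) -> list:
    """Stage (b), step 1: human motion -> robot-specific joint sequence."""
    raise NotImplementedError("see ./H-ACT/retarget")

def track(robot_actions: list, robot: str) -> None:
    """Stage (b), step 2: the RL-fine-tuned controller tracks the sequence."""
    raise NotImplementedError("see the RoboJuDo deployment framework")

def fromw1_pipeline(instruction: str, robot: str = "unitree_g1") -> None:
    human_motion = hgpt_generate(instruction, use_cot=True)  # H-GPT
    robot_actions = retarget(human_motion, robot=robot)      # retargeting
    track(robot_actions, robot=robot)                        # tracking + sim-to-real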

We extensively evaluate FRoM-W1 on the Unitree H1 and G1 robots. Results demonstrate superior performance on the HumanML3D-X benchmark for human whole-body motion generation, and the reinforcement-learning fine-tuning we introduce consistently improves both the motion-tracking accuracy and the task success rates of these humanoid robots. We open-source the entire FRoM-W1 framework and hope it will advance the development of humanoid intelligence.

🔥 Roadmap

  • 🎉 Release the initial codebase for the H-GPT and H-ACT modules
  • 🎉 Release the amazing humanoid-robot deployment framework RoboJuDo
  • Release the CoT datasets of the HumanML3D-X and Motion-X benchmarks, and the δHumanML3D-X benchmark
  • Release checkpoints for the baseline models: SMPL-X versions of T2M, MotionDiffuse, MLD, and T2M-GPT
  • 🎉 Release the Technical Report and Project Page of FRoM-W1!
  • More powerful models are work in progress

💾 Datasets

Due to license restrictions, we cannot publicly share all of the data. Here are the reference download and processing links for the relevant datasets:

H-GPT Module

  • HumanML3D-X: Please refer to the process in the Motion-X repo to download and process the corresponding AMASS data. The CoT part can be downloaded here.
  • δHumanML3D-X: After obtaining the HumanML3D-X data, replace its textual instructions with the perturbed versions provided here (a minimal replacement sketch follows this list).
  • Motion-X: Please refer to the original Motion-X repo. Note that we did not use the Motion-X++ version; specifically, we used the version from [2024.2.6].
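
For δHumanML3D-X, the swap amounts to overwriting each clip's text annotation with its perturbed counterpart. The snippet below is a minimal sketch that assumes one annotation file per clip, matched by filename across two directories; the actual on-disk layout may differ, so adapt the paths accordingly.

# Minimal sketch for building δHumanML3D-X: overwrite each clip's
# textual instruction with its perturbed version. The layout assumed
# here (one .txt per clip, matched by filename) is an assumption.
from pathlib import Path
import shutil

original_texts = Path("HumanML3D-X/texts")   # placeholder path
perturbed_texts = Path("perturbed_texts")    # placeholder path

for src in sorted(perturbed_texts.glob("*.txt")):
    dst = original_texts / src.name
    if dst.exists():                # replace only clips present in both sets
        shutil.copyfile(src, dst)   # overwrite with the perturbed instruction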

H-ACT Module

  • AMASS: Please refer to the download and processing procedures for the AMASS dataset in the human2humanoid project.
  • AMASS-H1: The retargeted dataset for the Unitree H1 can be obtained from the link provided by human2humanoid.
  • AMASS-G1: We provide a retargeted dataset for the Unitree G1, with the link available here.

🧠 Models

To keep the repo organized, we provide a subset of core model checkpoints below:

H-GPT Module

  • Eval Model: HuggingFace link; trained following the T2M pipeline with the SMPL-X format.
  • Baseline Models: HuggingFace link, including the SMPL-X versions of the T2M, MotionDiffuse, MLD, and T2M-GPT models.
  • H-GPT w/o CoT: HuggingFace link; refer to this script to merge the LoRA parameters into the original Llama-3.1 model (a minimal merge sketch follows this list).
  • H-GPT: HuggingFace link; refer to this script to merge the LoRA parameters into the original Llama-3.1 model.
  • H-GPT++ w/o CoT: HuggingFace link; refer to this script to merge the LoRA parameters into the original Llama-3.1 model.
  • H-GPT++: HuggingFace link; refer to this script to merge the LoRA parameters into the original Llama-3.1 model.
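
For reference, a merge along these lines can be done with the standard PEFT workflow. The sketch below assumes a Llama-3.1 base checkpoint and uses placeholder paths; the repo's own merge script should be preferred.

# Hedged sketch of merging the H-GPT LoRA weights into a Llama-3.1 base
# model via the standard PEFT workflow. Paths and the exact base-model
# variant are assumptions; prefer the merge script linked above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"   # assumed base variant
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/h-gpt-lora")

merged = model.merge_and_unload()              # fold LoRA deltas into the base weights
merged.save_pretrained("./h-gpt-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("./h-gpt-merged")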

H-ACT Module

  • H1-Full: Teacher Policy, Student Policy
  • H1-Clean: Teacher Policy, Student Policy
  • G1-Full: Teacher Policy, Student Policy
  • G1-Clean: Teacher Policy, Student Policy

If you require additional model checkpoints, please contact us.

🚀 Quick Start

Setup

# Create and activate the conda environment
conda create -n fromw1 python=3.10
conda activate fromw1

# Install the H-GPT deployment and H-ACT retargeting dependencies
pip install -r ./H-GPT/requirements_deploy.txt
pip install -r ./H-ACT/retarget/requirements.txt

Inference

H-GPT

  1. Download the H-GPT whole-body motion tokenizer and motion generator from HuggingFace.
  2. Replace the paths to the motion tokenizer and the motion generator at lines 55 and 78 of ./H-GPT/hGPT/configs/config_deployment_cot.yaml (a config-patch sketch follows this list).
  3. Run bash ./H-GPT/app.sh to deploy the H-GPT model as a Gradio app and generate human motions.
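
If you would rather patch the config programmatically than edit it by hand, something like the sketch below works, assuming the file is plain YAML. The key names here are hypothetical, so check the actual keys at lines 55 and 78 of the file.

# Sketch: patch the checkpoint paths in the deployment config.
# Assumes plain YAML; the key names below are hypothetical, so check
# the actual keys at lines 55 and 78 of the file.
import yaml

cfg_path = "./H-GPT/hGPT/configs/config_deployment_cot.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["motion_tokenizer_path"] = "/path/to/motion_tokenizer"  # hypothetical key
cfg["motion_generator_path"] = "/path/to/motion_generator"  # hypothetical key

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)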

H-ACT

  1. Download the SMPL and MANO models and organize them according to the H-ACT README file.
  2. Run python ./H-ACT/retarget/main.py to retarget the generated human motions into robot-specific joint sequences (a batch-retargeting sketch follows this list).
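
To retarget a batch of generated motions, you could wrap the script in a small loop like the one below. Note that the --input/--output flags are hypothetical, since the real interface is defined in ./H-ACT/retarget/main.py.

# Sketch: batch-retarget generated human motions by invoking the repo's
# script once per file. The --input/--output flags are hypothetical;
# consult ./H-ACT/retarget/main.py for the real interface.
import subprocess
from pathlib import Path

motions = Path("generated_motions")   # placeholder input directory
out_dir = Path("retargeted")
out_dir.mkdir(exist_ok=True)

for motion in sorted(motions.glob("*.npy")):
    subprocess.run(
        ["python", "./H-ACT/retarget/main.py",
         "--input", str(motion),                   # hypothetical flag
         "--output", str(out_dir / motion.name)],  # hypothetical flag
        check=True,
    )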

Deployment

After obtaining the retargeted robot motion sequence, you can conveniently use our RoboJuDo repo to run various tracking policies in both simulation and real-world scenarios.

🛠️ Model Training and Evaluation

H-GPT

Please refer to the corresponding H-GPT README file in the subfolder.

H-ACT

Please refer to the corresponding H-ACT README file in the subfolder.

🙏 Acknowledgements

We extend our gratitude to Biao Jiang for discussions and assistance regarding the motion generation models, and to Tairan He and Ziwen Zhuang for their discussions and help with the motion tracking component.

We also thank all the relevant open-source datasets and codebases; these open-source projects have propelled the advancement of the entire field!

📄 Citation

If you find our work useful, please cite it as follows:

@misc{li2026fromw1generalhumanoidwholebody,
      title={FRoM-W1: Towards General Humanoid Whole-Body Control with Language Instructions}, 
      author={Peng Li and Zihan Zhuang and Yangfan Gao and Yi Dong and Sixian Li and Changhao Jiang and Shihan Dou and Zhiheng Xi and Enyu Zhou and Jixuan Huang and Hui Li and Jingjing Gong and Xingjun Ma and Tao Gui and Zuxuan Wu and Qi Zhang and Xuanjing Huang and Yu-Gang Jiang and Xipeng Qiu},
      year={2026},
      eprint={2601.12799},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2601.12799}, 
}

You are welcome to star ⭐ our GitHub repo, raise issues, and submit PRs!

Footnotes

  1. Foundational Humanoid Robot Model - Whole-Body Control, Version 1
