Releases: LightwheelAI/LW-BenchHub
v1.0.0
Welcome to LW-BenchHub!
LW-BenchHub is a large-scale, end-to-end robotics simulation framework built on Isaac-Lab Arena for training robots to perform common daily tasks, with a current focus on kitchen manipulation and loco-manipulation. Developed by the Lightwheel team, it provides a comprehensive workflow from teleoperation data collection to reinforcement learning training. LW-BenchHub is built upon four key pillars:
- High‑quality, highly‑generalizable kitchen‑asset environments — featuring 10 layouts, 10 style combinations, and high‑fidelity assets (pulled via the Lightwheel SDK). The framework launches with 268 benchmark tasks (130 Lightwheel-LIBERO-Tasks, 138 Lightwheel-Robocasa-Tasks) covering manipulation, loco‑manipulation, table‑top actions, atomic skills, navigation, and long‑horizon compositional tasks. It offers one‑click switching across 7 adapted robot types (Unitree G1, PandaOmron, DoublePanda, Agilex Piper, ARX X7s, Franka, LeRobot SO100/101 Arm) comprising 27 specific robot variants.
- Complete end‑to‑end data‑to‑policy pipeline — supporting the full sim‑to‑real loop: from teleoperation data collection and deterministic trajectory replay, to IL/RL model training, rigorous evaluation, and smooth deployment. A reproducible evaluation pipeline with unified metrics is provided. The sim‑to‑real pipeline has been successfully validated on real robots, including the ARX X7s and Agilex Piper.
- Intuitive and reproducible RL configuration design — supporting generic RL configuration for a class of robots and tasks through a decorator-based binding mechanism, enabling modular registration and effortless switching or reproduction of RL setups. Seamlessly integrates with open-source RL libraries such as rsl-rl and skrl.
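The decorator-based binding described above follows a common registry pattern. The sketch below is a minimal illustration of that pattern only; the registry, decorator, and config names are hypothetical and do not reflect LW-BenchHub's actual API.

```python
# Minimal sketch of a decorator-based RL-config registry.
# All names here (register_rl_config, RL_CONFIG_REGISTRY, the config
# class and its fields) are illustrative assumptions, not LW-BenchHub API.

RL_CONFIG_REGISTRY: dict = {}

def register_rl_config(robot: str, task: str):
    """Bind a config class to a (robot, task) pair at import time."""
    def decorator(cls):
        RL_CONFIG_REGISTRY[(robot, task)] = cls
        return cls
    return decorator

@register_rl_config(robot="franka", task="kitchen_pick_place")
class FrankaPickPlaceCfg:
    num_envs = 4096          # parallel simulation environments
    learning_rate = 3e-4     # optimizer step size

def get_rl_config(robot: str, task: str):
    """Look up and instantiate the config bound to this pair."""
    return RL_CONFIG_REGISTRY[(robot, task)]()

cfg = get_rl_config("franka", "kitchen_pick_place")
print(type(cfg).__name__)  # FrankaPickPlaceCfg
```

Because registration happens when the module is imported, switching robots or tasks reduces to changing the lookup key, which is what makes setups easy to swap and reproduce.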
- Large-scale kitchen manipulation dataset — a released dataset spanning 219 unique tasks (89 from Lightwheel-Robocasa-Tasks, 130 from Lightwheel-LIBERO-Tasks) and 4 robots (LeRobot, ARX X7s, Unitree G1, Agilex Piper). The dataset contains 21,500 demonstration episodes (20,537,015 frames), with 50 episodes for each available (robot, task) pair, captured in diverse, interactive kitchen environments.
This initial release establishes a solid foundation for scalable robot‑learning research, delivering production‑ready APIs, detailed documentation, and rich examples to help you get started quickly.