# FastUMI-100K: A Large-Scale, High-Quality Robotic Manipulation Demonstration Dataset
Data-driven robotic manipulation learning depends on large-scale, high-quality expert demonstration datasets. However, existing datasets, collected primarily through human teleoperation of robots, are limited in scalability, trajectory smoothness, and applicability across different robotic embodiments in real-world environments.
We present FastUMI-100K, a large-scale UMI-style multimodal demonstration dataset, designed to overcome these limitations and meet the growing complexity of real-world manipulation tasks. Collected by FastUMI, a novel robotic system featuring a modular, hardware-decoupled mechanical design and an integrated lightweight tracking system, FastUMI-100K offers a more scalable, flexible, and adaptable solution to fulfill the diverse requirements of real-world robot demonstration data.
| Feature | Description |
|---|---|
| Scale | Over 100,000 demonstration trajectories |
| Tasks | 54 tasks covering hundreds of object types |
| Environment | Representative household environments |
| Multimodal | End-effector states, multi-view wrist-mounted fisheye images, and textual annotations |
| Length | Each trajectory ranges from 120 to 500 frames |
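To illustrate how the multimodal records described above fit together, here is a minimal sketch of a trajectory container. The field names (`ee_state`, `wrist_images`, `annotation`) and the file-naming pattern are illustrative assumptions, not the dataset's actual schema; the frame-count check follows the 120-500 range stated in the table.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    # Hypothetical per-frame record: field names are assumptions.
    ee_state: List[float]    # end-effector state, e.g. [x, y, z, qx, qy, qz, qw]
    wrist_images: List[str]  # paths to multi-view wrist-mounted fisheye images

@dataclass
class Trajectory:
    task: str                # one of the 54 task categories
    annotation: str          # textual annotation for the demonstration
    frames: List[Frame] = field(default_factory=list)

    def is_valid_length(self) -> bool:
        # Per the table above, each trajectory has 120 to 500 frames.
        return 120 <= len(self.frames) <= 500

# Build a minimal synthetic trajectory to show the structure.
traj = Trajectory(
    task="pick_place_cup",
    annotation="Pick up the cup and place it on the shelf.",
)
for t in range(120):
    traj.frames.append(Frame(
        ee_state=[0.0] * 7,
        wrist_images=[f"frame_{t:04d}_view0.jpg", f"frame_{t:04d}_view1.jpg"],
    ))

print(len(traj.frames), traj.is_valid_length())
```

Keeping end-effector states and image references in one per-frame record makes it straightforward to iterate over time-aligned multimodal observations during training.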

| What's Next | Description |
|---|---|
| Paper | Detailed technical paper and experimental results |
| Dataset | Complete dataset download links |
| Code | Source code and toolkits |
| Documentation | Comprehensive usage documentation and tutorials |
More detailed information will be released soon; stay tuned!
⭐ If this project helps you, please give us a star!
