It would be nice if the community could share the SOTA locomotion-only or mobile-manipulation models.
A new (locomotion-only) one published today seems notable because:
- they test it on LeKiwi (hence it should also work for aloha mini)
- it works zero-shot without odometry
- it doesn't seem to require LIDAR / ultrasound / TOF sensors
- it has memory and avoids obstacles in real time
https://steinate.github.io/logoplanner.github.io/
Though it's still not the holy grail: I'm looking for models that can identify objects and places zero-shot from implicit natural-language requests, that scale to kilometer-range distances, and that allow seamless hand-off to a manipulation model (ideally with a good initial pose).