Official implementation of "Learning to Anchor Visual Odometry: KAN-Based Pose Regression for Planetary Landing"
Accepted to IEEE Robotics and Automation Letters (RA-L), December 2025
Xubo Luo<sup>1</sup>, Zhaojin Li<sup>2</sup>, Xue Wan<sup>2</sup>, Wei Zhang<sup>2</sup>, Leizheng Shu<sup>2</sup>

<sup>1</sup>University of Chinese Academy of Sciences&nbsp;&nbsp;<sup>2</sup>Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences
KANLoc is a monocular localization framework designed for autonomous lunar landing that tightly couples visual odometry (VO) with a lightweight Kolmogorov-Arnold Network (KAN)-based absolute pose regressor. By fusing high-frequency relative tracking with sparse but robust global anchors, KANLoc achieves:
- 32% reduction in translation error
- 45% reduction in rotation error
- Real-time performance at ≥15 FPS
- Compact model size (35.6M parameters)
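The core idea of fusing high-frequency relative tracking with sparse global anchors can be illustrated with a toy position-only filter. The class below is a hypothetical sketch, not KANLoc's actual fusion: VO increments are accumulated between anchors (so drift grows), and each absolute fix from the pose regressor pulls the estimate back toward it with a blending weight `alpha` (an assumed parameter for illustration).

```python
import numpy as np

class AnchoredOdometry:
    """Toy illustration of anchoring drifting relative odometry with
    sparse absolute pose fixes. Names and the blending rule are
    hypothetical; see the paper for KANLoc's actual fusion."""

    def __init__(self, alpha=0.5):
        self.position = np.zeros(3)  # current position estimate
        self.alpha = alpha           # trust placed in an absolute anchor

    def integrate_relative(self, delta):
        # High-frequency VO step: accumulate relative motion.
        # Between anchors, drift grows without bound.
        self.position = self.position + np.asarray(delta, dtype=float)

    def apply_anchor(self, absolute):
        # Sparse global fix: pull the estimate toward the absolute
        # position predicted by the pose regressor.
        absolute = np.asarray(absolute, dtype=float)
        self.position = (1 - self.alpha) * self.position + self.alpha * absolute
```

With `alpha = 1.0` each anchor fully resets the estimate; smaller values trade anchor trust against VO smoothness. A real system would fuse full SE(3) poses with uncertainty-weighted updates rather than linear blending.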
If you find this work useful, please cite:
```bibtex
@article{luo2025kanloc,
  title={Learning to Anchor Visual Odometry: KAN-Based Pose Regression for Planetary Landing},
  author={Luo, Xubo and Li, Zhaojin and Wan, Xue and Zhang, Wei and Shu, Leizheng},
  journal={IEEE Robotics and Automation Letters},
  year={2025},
  publisher={IEEE}
}
```

We thank the following open-source projects:
- ORB-SLAM3 for visual odometry
- DINO-v2 for visual features
- g2o for bundle adjustment
- AirSim for synthetic data generation
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ for the lunar exploration community
