
KANLoc: Learning to Anchor Visual Odometry with KAN-Based Pose Regression for Planetary Landing


Official implementation of "Learning to Anchor Visual Odometry: KAN-Based Pose Regression for Planetary Landing"

Accepted to IEEE Robotics and Automation Letters (RA-L), December 2025

Xubo Luo¹, Zhaojin Li², Xue Wan², Wei Zhang², Leizheng Shu²

¹University of Chinese Academy of Sciences   ²Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences


🎯 Overview

KANLoc is a monocular localization framework designed for autonomous lunar landing that tightly couples visual odometry (VO) with a lightweight Kolmogorov-Arnold Network (KAN)-based absolute pose regressor. By fusing high-frequency relative tracking with sparse but robust global anchors, KANLoc achieves:

  • 32% reduction in translation error
  • 45% reduction in rotation error
  • Real-time performance at ≥15 FPS
  • Remarkable parameter efficiency (35.6M parameters)
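The anchoring idea described above can be illustrated with a toy example. The sketch below (not the paper's method, which learns the fusion; all names and the blend weight `alpha` are hypothetical) dead-reckons a 2-D position from relative VO motions and blends in a sparse absolute fix whenever one is available, complementary-filter style:

```python
import numpy as np

def fuse_vo_with_anchors(rel_motions, anchors, alpha=0.7):
    """Integrate per-step VO displacements; when a sparse absolute
    position fix (a "global anchor") arrives at step k, pull the
    estimate toward it with weight alpha.

    rel_motions: (N, 2) per-step displacement estimates
    anchors: dict {step_index: absolute (2,) position}
    """
    pos = np.zeros(2)
    track = []
    for k, d in enumerate(rel_motions):
        pos = pos + d                    # VO prediction (drift accumulates)
        if k in anchors:                 # sparse global anchor arrives
            pos = (1 - alpha) * pos + alpha * anchors[k]
        track.append(pos.copy())
    return np.array(track)

# Demo: straight-line ground truth, VO with a constant per-step bias,
# and an absolute fix every 10th step.
true = np.cumsum(np.ones((50, 2)) * 0.1, axis=0)
rel = np.diff(np.vstack([np.zeros(2), true]), axis=0) + 0.02   # biased VO
anchors = {k: true[k] for k in range(9, 50, 10)}
fused = fuse_vo_with_anchors(rel, anchors)
drifted = fuse_vo_with_anchors(rel, {})
```

With no anchors the bias compounds over all 50 steps; with even five sparse anchors the terminal error stays bounded, which is the intuition behind coupling VO with an absolute pose regressor.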

📐 Architecture
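To fix ideas on the KAN-based regression head, here is a minimal sketch of a Kolmogorov-Arnold layer and a two-layer pose head. It is an assumption-laden illustration, not the released model: edge functions are parameterised with a Gaussian-RBF basis (a simple stand-in for the B-splines usually used in KANs), and all dimensions, names, and the tanh between layers are made up for the example.

```python
import numpy as np

class KANLayer:
    """Minimal KAN layer: each edge (i, j) carries a learnable 1-D function
    phi_ij(x) = sum_k c_ijk * exp(-(x - mu_k)^2 / (2 * sigma^2)),
    and output j sums phi_ij over all inputs i."""

    def __init__(self, in_dim, out_dim, n_basis=8, x_range=(-1.0, 1.0), seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(*x_range, n_basis)            # (K,)
        self.sigma = (x_range[1] - x_range[0]) / n_basis
        # one coefficient vector per edge: (in_dim, out_dim, K)
        self.coef = rng.normal(0.0, 0.1, (in_dim, out_dim, n_basis))

    def __call__(self, x):
        # x: (batch, in_dim) -> RBF activations (batch, in_dim, K)
        b = np.exp(-((x[..., None] - self.centers) ** 2)
                   / (2 * self.sigma ** 2))
        # y_j = sum over inputs i and basis k of b_ik * c_ijk
        return np.einsum("bik,ijk->bj", b, self.coef)

class KANPoseRegressor:
    """Hypothetical two-layer KAN head mapping an image feature vector
    to a 7-D pose: 3-D translation plus a unit quaternion."""

    def __init__(self, feat_dim=64, hidden=16):
        self.l1 = KANLayer(feat_dim, hidden, seed=1)
        self.l2 = KANLayer(hidden, 7, seed=2)

    def __call__(self, feats):
        pose = self.l2(np.tanh(self.l1(feats)))
        t, q = pose[:, :3], pose[:, 3:]
        q = q / np.linalg.norm(q, axis=1, keepdims=True)   # unit quaternion
        return t, q

reg = KANPoseRegressor()
t, q = reg(np.random.default_rng(3).normal(size=(2, 64)))
```

Because every edge carries its own learned univariate function, a KAN head of this shape can stay compact while fitting the nonlinear feature-to-pose mapping, which is the property the paper exploits for parameter efficiency.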


📝 Citation

If you find this work useful, please cite:

@article{luo2025kanloc,
  title={Learning to Anchor Visual Odometry: KAN-Based Pose Regression for Planetary Landing},
  author={Luo, Xubo and Li, Zhaojin and Wan, Xue and Zhang, Wei and Shu, Leizheng},
  journal={IEEE Robotics and Automation Letters},
  year={2025},
  publisher={IEEE}
}

🙏 Acknowledgments

We thank the following open-source projects:


📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with ❤️ for the lunar exploration community
