Robotics Engineer • Vision-Language-Action Models • State-Space Architectures
I build intelligent robotic systems with VLA frameworks and Mamba-based models, deployed on real-world robot platforms.
I'm focused on developing scalable Vision-Language-Action (VLA) models and applying advanced state-space architectures (Mamba, S4, S5) and Transformers to real robotic systems.
I have hands-on experience with manipulators, quadrupeds, AGVs, and multi-robot coordination.
I enjoy working at the intersection of robotics, machine learning, and embodied intelligence.
- VLA model using Mamba SSM + Eagle Vision Encoder + Qwen LLM (fusion sketched below)
- Handles complex manipulation & multi-step planning
- Trained on LIBERO, Meta-World, RoboCasa tasks
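
To make the fusion concrete, here is a minimal PyTorch sketch of a Mamba-style selective-scan block decoding an action from concatenated vision and language tokens. Everything in it is illustrative: the dimensions, the `SelectiveSSMBlock` itself, and the random tensors standing in for Eagle and Qwen outputs are assumptions, not the actual model.

```python
import torch
import torch.nn as nn

class SelectiveSSMBlock(nn.Module):
    """Simplified Mamba-style block: a diagonal SSM whose step size and
    B/C projections depend on the input (the 'selective' part), run as a
    sequential scan for clarity rather than speed."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_model)
        self.dt_proj = nn.Linear(d_model, d_model)      # per-token step size
        self.B_proj = nn.Linear(d_model, d_state)       # input-dependent B
        self.C_proj = nn.Linear(d_model, d_state)       # input-dependent C
        self.A_log = nn.Parameter(torch.zeros(d_model, d_state))
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):                               # x: (batch, seq, d_model)
        b, t, d = x.shape
        u = self.in_proj(x)
        dt = torch.nn.functional.softplus(self.dt_proj(x))  # positive step sizes
        A = -torch.exp(self.A_log)                      # negative real -> stable
        Bx, Cx = self.B_proj(x), self.C_proj(x)         # (b, t, d_state)
        h = x.new_zeros(b, d, A.shape[1])               # recurrent hidden state
        ys = []
        for i in range(t):
            dA = torch.exp(dt[:, i, :, None] * A)       # discretized A: (b, d, n)
            dB = dt[:, i, :, None] * Bx[:, i, None, :]  # discretized B: (b, d, n)
            h = dA * h + dB * u[:, i, :, None]          # state update
            ys.append((h * Cx[:, i, None, :]).sum(-1))  # readout: (b, d)
        return self.out_proj(torch.stack(ys, dim=1))

# Hypothetical fusion: prepend vision tokens to instruction tokens, decode
# a 7-DoF action from the final position. The random tensors below stand in
# for Eagle encoder features and Qwen instruction embeddings.
vision_tokens = torch.randn(1, 64, 512)
lang_tokens = torch.randn(1, 32, 512)
backbone = SelectiveSSMBlock(d_model=512)
fused = backbone(torch.cat([vision_tokens, lang_tokens], dim=1))
action = nn.Linear(512, 7)(fused[:, -1])                # (1, 7) end-effector action
```

(A real implementation would replace the Python loop with Mamba's parallel associative scan.)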
- Diffusion policy learning for real robot tasks (sampling loop sketched below)
- Robust to multi-modal sensor noise
- Deployed on Franka and ViperX manipulators
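
For a sense of how a diffusion policy produces actions at test time, here is a minimal DDPM-style sampling sketch. The `NoisePredictor` MLP, its dimensions, and the noise schedule are hypothetical stand-ins, not the deployed networks.

```python
import torch
import torch.nn as nn

class NoisePredictor(nn.Module):
    """Hypothetical noise-prediction net: conditions on an observation
    embedding and a diffusion timestep to denoise an action chunk."""

    def __init__(self, act_dim=7, obs_dim=128, horizon=16):
        super().__init__()
        self.act_dim, self.horizon = act_dim, horizon
        self.net = nn.Sequential(
            nn.Linear(act_dim * horizon + obs_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, act_dim * horizon),
        )

    def forward(self, noisy_actions, obs, t):
        x = torch.cat([noisy_actions.flatten(1), obs, t[:, None]], dim=-1)
        return self.net(x).view(-1, self.horizon, self.act_dim)

@torch.no_grad()
def sample_actions(model, obs, steps=50):
    """DDPM-style reverse process: start from Gaussian noise and
    iteratively denoise into an executable action sequence."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas, alpha_bar = 1 - betas, torch.cumprod(1 - betas, dim=0)
    a = torch.randn(obs.shape[0], model.horizon, model.act_dim)
    for t in reversed(range(steps)):
        eps = model(a, obs, torch.full((obs.shape[0],), float(t)))
        a = (a - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:                                   # add noise except at the last step
            a = a + betas[t].sqrt() * torch.randn_like(a)
    return a

obs = torch.randn(1, 128)                           # stand-in observation embedding
actions = sample_actions(NoisePredictor(), obs)     # (1, 16, 7) action chunk
```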
- State-space locomotion model for Unitree Go2/A1 (recurrence sketched below)
- Stabilizes policies outdoors and on uneven terrain
- Vision-conditioned gait modulation
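
The deployment-side mechanic, sketched under simplifying assumptions: a diagonal LTI state-space layer (S4/S5-style) discretized with zero-order hold and run as a recurrence, which is what makes SSM policies cheap to evaluate in a fixed-rate control loop. The sizes, random parameters, and 50 Hz rate below are illustrative only.

```python
import numpy as np

def discretize_zoh(A_diag, B, dt):
    """Zero-order-hold discretization of a diagonal continuous-time SSM."""
    dA = np.exp(dt * A_diag)                       # (n,)
    dB = ((dA - 1.0) / A_diag)[:, None] * B        # (n, m)
    return dA, dB

def ssm_step(h, u, dA, dB, C):
    """One recurrent step: O(1) memory, suitable for an onboard control loop."""
    h = dA * h + dB @ u
    return h, C @ h

rng = np.random.default_rng(0)
A_diag = -np.exp(rng.normal(size=32))              # negative real -> stable dynamics
B = rng.normal(size=(32, 12))                      # 12 proprioceptive inputs
C = rng.normal(size=(12, 32))                      # 12 gait-modulation outputs
dA, dB = discretize_zoh(A_diag, B, dt=0.02)        # 50 Hz control loop
h = np.zeros(32)
for _ in range(5):                                 # stream of joint-state readings
    h, y = ssm_step(h, rng.normal(size=12), dA, dB, C)
```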
- Worked with Franka, ViperX, Unitree Go2, AGVs, and differential-drive robots
- Multi-arm coordination: simulated cube transfer, LIBERO tasks
- RealSense RGB-D pipelines for grasping tasks (pipeline sketched below)
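
A minimal sketch of such a pipeline with the pyrealsense2 API: stream aligned color and depth, then deproject a pixel to a 3D point in the camera frame. The center pixel is a placeholder for whatever a grasp detector would output, and the stream settings are illustrative.

```python
import numpy as np
import pyrealsense2 as rs

# Stream aligned color + depth, then back-project one pixel to a 3D point
# in the camera frame. The center pixel stands in for the output of a
# grasp detector running on the color image.
pipeline, config = rs.pipeline(), rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)                  # map depth onto color pixels

try:
    frames = align.process(pipeline.wait_for_frames())
    depth, color = frames.get_depth_frame(), frames.get_color_frame()
    rgb = np.asanyarray(color.get_data())          # a grasp detector would run here
    intrin = depth.profile.as_video_stream_profile().intrinsics

    u, v = 320, 240                                # stand-in grasp pixel
    z = depth.get_distance(u, v)                   # depth in meters
    point = rs.rs2_deproject_pixel_to_point(intrin, [u, v], z)
    print("grasp point in camera frame (m):", np.round(point, 3))
finally:
    pipeline.stop()
```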