I build real-world AI systems for robotics, focused on humanoid perception, behavior intelligence, and embodied decision making.
My work sits at the intersection of applied machine learning, robotics systems engineering, and on-device deployment, where systems must survive noisy sensors, limited compute, and real environments, not just simulations.
Currently, I am an AI Intern on the R&D team at a humanoid robotics startup (I HUB Research & Robotics), working on end-to-end perception pipelines, speech → intent → action systems, behavior logic, ROS2-based architectures, and edge deployment on NVIDIA Jetson platforms.
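For illustration, a minimal sketch of that speech → intent → action loop, assuming the open-source openai-whisper package for transcription; the intent keywords and action handler below are hypothetical placeholders, not the production stack:

# Speech -> intent -> action, minimal illustrative sketch.
# Assumes the open-source `openai-whisper` package; intent keywords and
# the action handler are hypothetical placeholders, not production code.
import whisper

INTENTS = {
    "wave": ("wave", "hello", "hi"),
    "stop": ("stop", "halt", "freeze"),
}

def transcribe(path: str) -> str:
    # Speech -> text with a small Whisper model.
    return whisper.load_model("base").transcribe(path)["text"].lower()

def to_intent(text: str) -> str:
    # Text -> intent via simple keyword matching.
    for intent, keywords in INTENTS.items():
        if any(word in text for word in keywords):
            return intent
    return "unknown"

def act(intent: str) -> None:
    # Intent -> action; on a robot this stage would publish a ROS2 command.
    print(f"executing action: {intent}")

if __name__ == "__main__":
    act(to_intent(transcribe("command.wav")))

In a deployed system each stage runs continuously (VAD-gated audio in, ROS2 messages out), but the pipeline shape is the same.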
AI Intern - Humanoid Robotics R&D (real robots, deployed systems)
TVS Automobiles - Industrial humanoid vision systems for alloy & paint defect detection
IEEE research on real-time multimodal microplastic detection
Founder of MaaKosh, a maternal & neonatal health initiative
YOLOv8 · DINOv3 · SAM 2.1 · Depth Anything · DUSt3R
Whisper · VAD · Speech-to-Intent
TensorFlow · OpenCV · LLaVA-style architectures
Jetson Orin Nano · Jetson Nano
Docker · Linux · ROS2 deployment
Real-time inference under memory & latency constraints
STM32 · Arduino · Raspberry Pi
IMU · RGB & Depth Cameras · Encoders
Hardware–Software co-design
Development of a Real-Time Multimodal Sensor for Field-Based Microplastic Detection
IEEE Conference — IC3ECSBHI
Multimodal sensor fusion combining optical sensing, impedance analysis, and machine learning for real-time, field-deployable environmental monitoring.
Building systems that perceive, reason, and act — outside the lab.