```python
class AlyyanAhmed:
    def __init__(self):
        self.role = "AI/ML Engineer"
        self.focus_areas = [
            "Computer Vision 👁️",
            "Agentic LLMs 🤖",
            "Production MLOps ⚙️",
        ]
        self.expertise = {
            "pipeline": "research → training → deployment → monitoring",
            "infrastructure": ["CI/CD", "Docker", "Model Serving", "Edge AI"],
            "frameworks": ["LangChain", "LangGraph", "RAG", "Agentic Workflows"],
        }

    def current_mission(self):
        return "Building intelligent systems that bridge research and production"
```

### 😊 Facial Emotion Recognition - Full MLOps System
**Tech Stack:** ViT • DVC • MLflow • Docker • Gradio

**Features:**
- End-to-end ML pipeline with version control
- CI/CD automation for seamless deployment
- Interactive Gradio web interface
- Production deployment on Hugging Face Spaces
### 🤖 Agentic SQL Assistant - LLM + MySQL Integration

**Tech Stack:** LangChain • LangGraph • RAG • MySQL

**Features:**
- Natural language to SQL conversion
- Real-time analytics and insights
- Coaching-style feedback system
- Hybrid RAG + SQL architecture
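The routing step of a hybrid RAG + SQL architecture can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: the function name `route_question` and the toy `SCHEMA_COLUMNS` set are invented here, and a real system would use an LLM or embedding classifier rather than keyword overlap.

```python
# Toy router for a hybrid RAG + SQL assistant: questions mentioning known
# schema columns go to SQL generation, everything else to document retrieval.
# SCHEMA_COLUMNS and route_question are illustrative names, not project code.

SCHEMA_COLUMNS = {"orders", "revenue", "customers", "signup_date"}

def route_question(question: str) -> str:
    """Return 'sql' if the question targets structured data, else 'rag'."""
    tokens = {t.strip("?,.").lower() for t in question.split()}
    return "sql" if tokens & SCHEMA_COLUMNS else "rag"

print(route_question("What was total revenue last month?"))  # -> sql
print(route_question("Summarize the onboarding guide"))      # -> rag
```

In production the same routing decision would typically be made by the LLM itself (e.g. via a LangGraph conditional edge), but the control flow is the same.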
### 🩺 Pneumonia Detection - Edge-Optimized CNN

**Tech Stack:** TensorFlow • TFLite • OpenCV

**Features:**
- Lightweight CNN architecture
- Optimized for low-resource environments
- Clinical-grade accuracy
- Mobile and edge device inference
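The size/latency win on edge devices comes largely from TFLite-style post-training quantization, which maps float32 activations to uint8 via an affine transform `real = scale * (q - zero_point)`. A minimal sketch of that mapping, with toy values rather than the project's actual calibration parameters:

```python
import numpy as np

# Sketch of affine uint8 quantization (the scheme TFLite uses for
# post-training quantization): real = scale * (q - zero_point).
# scale and zero_point values below are toy numbers for [-1, 1] inputs.

def quantize(x: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return scale * (q.astype(np.float32) - zero_point)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zero_point = 1 / 127.5, 128  # maps [-1, 1] onto [0, 255]
q = quantize(x, scale, zero_point)
x_hat = dequantize(q, scale, zero_point)
assert np.allclose(x, x_hat, atol=scale)  # round-trip error within one step
```

The quantized tensor uses a quarter of the memory of float32, which is what makes inference feasible on low-resource hardware.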
### 🫁 Chest X-ray Multi-Class Classification

**Tech Stack:** PyTorch • Albumentations • scikit-learn

**Features:**
- Advanced data augmentation pipeline
- Comprehensive evaluation metrics
- Detailed classification reports
- Transfer learning with pre-trained models
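For a multi-class classifier, "comprehensive evaluation metrics" usually means at least a confusion matrix plus per-class precision and recall. A self-contained sketch of that computation (the class labels and predictions here are made up for illustration; the project's stack would get the same numbers from `sklearn.metrics.classification_report`):

```python
import numpy as np

# Per-class evaluation for a multi-class classifier: confusion matrix,
# then precision (per predicted class) and recall (per actual class).
# y_true / y_pred below are toy data, not real X-ray results.

def confusion_matrix(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1  # rows = actual class, columns = predicted class
    return cm

def per_class_metrics(cm):
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # column sums = predicted
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # row sums = actual
    return precision, recall

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
precision, recall = per_class_metrics(cm)
```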
### 🤖 Smart Cleaning Bot - Edge AI + Robotics

**Tech Stack:** Raspberry Pi • Arduino • TensorFlow Lite

**Features:**
- Autonomous navigation system
- Real-time edge inference
- Advanced path planning algorithms
- Multi-sensor fusion
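A common entry point to multi-sensor fusion on this class of hardware is a complementary filter, which blends a fast-but-drifting gyro integral with a noisy-but-stable accelerometer angle. This is a generic textbook sketch, not the bot's actual fusion code; `alpha` and the sample readings are illustrative:

```python
# Complementary filter: high-pass the integrated gyro rate, low-pass the
# accelerometer angle, and blend with weight alpha. All values are toy data.

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyro integration with an accelerometer angle estimate."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
for gyro_rate, accel_angle in [(1.0, 0.1), (1.0, 0.2), (0.0, 0.2)]:
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.1)
# angle has converged toward the accelerometer estimate without its noise
```

Heavier alternatives (Kalman or extended Kalman filters) trade this simplicity for principled noise modeling.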
| 🎯 Area | 📋 Details |
|---|---|
| 🤖 Agentic AI | Tool use, memory systems, planning, multi-step reasoning |
| 🏭 Industrial RAG | Hybrid RAG + SQL architectures for enterprise |
| 🚀 Scalable ML | FastAPI, Docker, GPU/TPU deployments |
| 🤖 Robotics | Path planning, reinforcement learning, control systems |
| 💻 Local LLMs | vLLM, Ollama, efficient inference optimization |
