Mohamed Tamer | LLM Engineer & AI Innovator


LinkedIn · Email · Portfolio


🌟 About Me

const mohamedTamer = {
  identity: "LLM Engineer & Agentic AI Specialist",
  location: "Egypt 🇪🇬",
  education: "AI @ Benha University",
  mission: "Building intelligent systems that understand, reason, and act",

  expertise: {
    llmEngineering: ["RAG", "Fine-tuning", "Machine Learning", "Context Engineering"],
    agenticAI: ["Multi-Agent Systems", "MCP", "Agentic AI Systems", "n8n AI Automation"],
    mlops: ["Docker", "CI/CD", "Model Monitoring", "Production Deployment"],
    fullStack: ["FastAPI", "React/Next.js", "Real-time AI Apps"]
  },

  currentFocus: [
    "🤖 Building production-ready AI agents",
    "🔧 Fine-tuning LLMs with QLoRA/PEFT",
    "⚡ Optimizing RAG systems",
    "🚀 Shipping full-stack LLM applications",
    "⚡ Building n8n automation workflows"
  ],

  dreamRole: "LLM Engineer | AI Research Engineer | ML Engineer",
  philosophy: "Ship fast, iterate faster, deploy production-ready AI."
};

💡 I don't just experiment with AI; I architect, fine-tune, and deploy it into real-world production systems.


🎯 What Sets Me Apart

🧠 LLM Engineering Mastery

  • 🔬 RAG Systems: Building retrieval-augmented pipelines with vector DBs
  • 🎯 Fine-tuning: QLoRA, PEFT, LoRA for domain-specific models
  • 📊 Evaluation: LLM-as-judge, RAGAS, human-in-the-loop metrics
  • 🔗 Embeddings: Semantic search, chunking strategies, reranking

🤖 Agentic AI Expertise

  • ⚡ Agent Frameworks: LangGraph, CrewAI, AutoGen, OpenAI Agents SDK
  • 🛠️ Tool Integration: Function calling, MCP servers, custom tools
  • 🧩 Multi-Agent: Orchestration, collaboration, specialized workflows
  • 💾 Memory Systems: Short/long-term memory, context management

🚀 Production Engineering

  • 🐳 MLOps: Docker, Kubernetes, CI/CD pipelines
  • ☁️ Cloud Deploy: AWS SageMaker, Lambda, EC2
  • 📈 Monitoring: Model drift, performance tracking, A/B testing
  • ⚙️ Optimization: Quantization, caching, latency reduction

💻 Full-Stack AI Development

  • ⚡ Backend: FastAPI, async processing, WebSockets
  • 🎨 Frontend: React, Next.js, real-time AI interfaces
  • 🔄 Integration: Streaming responses, state management
  • 📱 UX: Building intuitive AI-powered applications

💼 Tech Arsenal

🤖 LLM & Generative AI

OpenAI LangChain LangGraph CrewAI AutoGen Anthropic Groq Ollama

🔧 LLM Engineering Tools

Hugging Face LlamaIndex ChromaDB Weaviate FAISS Qdrant

🎯 Fine-tuning & Training

QLoRA PEFT LoRA W&B

🧠 ML & Deep Learning

PyTorch TensorFlow scikit-learn Transformers Pandas NumPy

🚀 Backend & APIs

Python FastAPI Redis

🎨 Frontend & Full-Stack

React Next.js TypeScript Vercel

☁️ MLOps & DevOps

Docker AWS GitHub Actions MLflow

🛠️ AI Automation & Tools

n8n

📊 Data & Monitoring

PostgreSQL MongoDB Grafana Prometheus


🤖 LLM & Agentic AI Expertise

🔬 RAG Systems Engineering

Pipeline Architecture:

  • 📚 Document processing & chunking strategies
  • 🔍 Vector database optimization (Chroma, Pinecone, Qdrant)
  • 🎯 Hybrid search (dense + sparse retrieval)
  • 🔄 Reranking & context compression
  • 📊 Evaluation with RAGAS, LLM-as-judge
  • ⚡ Production optimization (caching, async)

Real-world Applications:

  • Knowledge-base Q&A systems
  • Document analysis pipelines
  • Multi-modal retrieval (text + images)
  • Enterprise search solutions
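The pipeline stages above can be sketched end to end in a few lines. A minimal, dependency-free illustration — the bag-of-words scoring is a stand-in for a real embedding model, an assumption made purely for brevity:

```python
from collections import Counter
import math

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Fixed-size chunking with overlap: the simplest chunking strategy."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy 'embedding': term frequencies. A real pipeline calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Dense-style retrieval: rank chunks by similarity to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ("Vector databases store embeddings. Reranking improves retrieval "
        "precision. Chunking splits documents.")
top = retrieve("how does reranking help retrieval", chunk(docs, size=5, overlap=1))
```

Swapping `embed` for a real model and `retrieve` for a vector-DB query preserves the same chunk → embed → retrieve shape; reranking and compression slot in after `retrieve`.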

🧬 Fine-tuning & Model Adaptation

Techniques:

  • 🎯 QLoRA & PEFT: Parameter-efficient training
  • 🔧 LoRA: Low-rank adaptation for LLMs
  • ⚡ Unsloth: 2x faster fine-tuning
  • 📊 Dataset Engineering: Instruction tuning, RLHF
  • 🎨 Domain Adaptation: Custom model specialization
  • 📈 Evaluation: Perplexity, downstream tasks

Models Worked With:

  • Llama 2/3, Mistral, Phi-3
  • Gemma, Qwen, CodeLlama
  • Custom domain-specific models
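The parameter-efficiency argument behind LoRA and QLoRA is concrete: instead of updating a full weight matrix, you train two low-rank factors B (d_out×r) and A (r×d_in) and add their scaled product as a delta. A pure-Python sketch of the bookkeeping — toy dimensions, no real model assumed:

```python
def lora_param_counts(d_in: int, d_out: int, r: int) -> tuple[int, int]:
    """Trainable parameters: full fine-tune vs. a rank-r LoRA adapter."""
    full = d_in * d_out              # every weight updated
    lora = r * d_in + d_out * r      # A is (r, d_in), B is (d_out, r)
    return full, lora

def apply_lora(W, A, B, alpha: float, r: int):
    """Effective weight W' = W + (alpha / r) * B @ A, with plain nested lists."""
    scale = alpha / r
    d_out, d_in = len(W), len(W[0])
    rank = len(A)
    delta = [[scale * sum(B[i][k] * A[k][j] for k in range(rank))
              for j in range(d_in)] for i in range(d_out)]
    return [[W[i][j] + delta[i][j] for j in range(d_in)] for i in range(d_out)]

# A 4096x4096 projection (typical for a 7B-class attention layer) at rank 8:
full, lora = lora_param_counts(4096, 4096, r=8)
# ~16.8M full weights vs. 65,536 adapter weights — roughly 0.4% of the parameters
```

QLoRA adds one more step: the frozen base weights W are stored 4-bit quantized while A and B train in higher precision, which is what makes single-GPU fine-tuning of large models practical.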

🤝 Multi-Agent Systems

Frameworks & Patterns:

  • 🔗 LangGraph: State machines, cyclic workflows
  • 👥 CrewAI: Role-based agent collaboration
  • 🔄 AutoGen: Conversational agents
  • 🎭 OpenAI Agents SDK: Tool-using agents
  • 🧩 MCP (Model Context Protocol): Server integration

Agent Capabilities:

  • Tool use & function calling
  • Planning & reasoning (ReAct, Plan-and-Execute)
  • Memory systems (conversation, semantic)
  • Multi-agent orchestration
  • Human-in-the-loop workflows
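Underneath every framework above, tool use reduces to a small dispatch loop: the model proposes a tool name and arguments, the runtime executes the tool, and the observation is fed back until the model produces a final answer. A framework-free sketch — the hard-coded `fake_llm` stands in for a real model call, an assumption for illustration only:

```python
import json

# Tool registry: name -> callable. This mapping is the core of function calling.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def fake_llm(messages: list[dict]) -> dict:
    """Stand-in for a model: first emits a tool call, then a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"final": f"The sum is {messages[-1]['content']}"}

def agent_loop(user_msg: str, max_steps: int = 5) -> str:
    """ReAct-style loop: decide -> act (call tool) -> observe -> answer."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        decision = fake_llm(messages)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])  # execute the tool
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "max steps exceeded"

answer = agent_loop("what is 2 + 3?")  # -> "The sum is 5"
```

LangGraph, CrewAI, and the OpenAI Agents SDK each wrap this same loop with state management, retries, and multi-agent routing on top.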

πŸ› οΈ Production LLM Engineering

Deployment & Optimization:

  • πŸš€ API Development: FastAPI, streaming responses
  • 🐳 Containerization: Docker, model serving
  • ⚑ Inference Optimization: vLLM, TensorRT-LLM
  • πŸ“Š Monitoring: Latency, token usage, costs
  • πŸ” Security: Input validation, output filtering
  • πŸ’° Cost Management: Caching, batching, fallbacks
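Of the cost controls above, caching is the highest-leverage one: an identical prompt should never trigger a second billable model call. A minimal sketch with a hash-keyed in-memory cache — the `call_model` stub is an assumption standing in for a real API client:

```python
import hashlib

_cache: dict[str, str] = {}
calls = 0  # counts actual (billable) model invocations

def call_model(prompt: str) -> str:
    """Stub for a real LLM API call; only this function would cost money."""
    global calls
    calls += 1
    return f"response to: {prompt}"

def cached_completion(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Exact-match cache keyed on model + prompt.

    Production systems add TTLs, persistence (e.g. Redis), and often
    semantic caching so paraphrased prompts also hit the cache."""
    key = hashlib.sha256(f"{model}|{prompt}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]

a = cached_completion("Summarize RAG in one line")
b = cached_completion("Summarize RAG in one line")  # served from cache, no API call
```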

Integration Patterns:

  • WebSocket for real-time streaming
  • Async processing with Celery/Redis
  • Rate limiting & queue management
  • A/B testing for prompts/models
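The rate-limiting and queue-management pattern can be prototyped with stdlib asyncio alone: a semaphore caps how many model calls are in flight while a queue absorbs bursts. A sketch — the sleep stands in for network latency, and the concurrency limit of 2 is an arbitrary assumption:

```python
import asyncio

async def fake_inference(prompt: str) -> str:
    await asyncio.sleep(0.01)   # stands in for a model API round-trip
    return prompt.upper()

async def worker(queue: asyncio.Queue, sem: asyncio.Semaphore, results: list):
    while True:
        prompt = await queue.get()
        async with sem:         # at most `concurrency` calls in flight
            results.append(await fake_inference(prompt))
        queue.task_done()

async def run(prompts: list[str], concurrency: int = 2) -> list[str]:
    queue, sem, results = asyncio.Queue(), asyncio.Semaphore(concurrency), []
    workers = [asyncio.create_task(worker(queue, sem, results)) for _ in range(4)]
    for p in prompts:
        queue.put_nowait(p)     # bursts land in the queue, not on the API
    await queue.join()          # wait until every request is processed
    for w in workers:
        w.cancel()
    return results

out = asyncio.run(run(["hello", "world", "stream"]))
```

The same shape scales out by replacing the in-process queue with Celery/Redis and the semaphore with a token-bucket limiter shared across workers.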

🌟 "Ship fast, iterate faster, deploy production-ready AI."

— Mohamed Tamer

Pinned

  1. Next_Hire_AI (Public)

     🚀 AI-powered interview preparation made simple and effective. Fast, intuitive, and built for your career goals.

     TypeScript · ★ 1