Hi, I'm Drew, a DevOps Architect specializing in AI/ML infrastructure and Kubernetes operations.
What you'll find here:
- Production-grade infrastructure code and architectural patterns
- 100+ Kubernetes deployment implementations
- AI/ML infrastructure with deep Nvidia expertise
- Open-source tools and automation frameworks
My Focus:
- Cloud infrastructure optimization (cutting client cloud spend by roughly 50% is not unusual)
- Production-ready Kubernetes deployments (a GPU scheduling sketch follows this list)
- LLM deployment and optimization
- Infrastructure as Code
- Unix philosophy, GNU ethos, cypherpunk-minded, polymath of the old school.
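As a minimal sketch of the GPU scheduling work mentioned above: the example assumes a cluster with the NVIDIA device plugin installed, and the pod name and CUDA image tag are illustrative placeholders rather than anything taken from this repo.

```bash
# Minimal sketch: schedule a GPU-backed pod (assumes the NVIDIA device plugin
# is installed in the cluster; pod name and image tag are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # ask the device plugin for one GPU
EOF

# Confirm the pod landed on a GPU node and printed the device inventory
kubectl logs gpu-smoke-test
```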
Thousands of Hours of Nvidia Training and Practice
This portfolio demonstrates practical application of Nvidia technologies across multiple projects:
| Project | Nvidia Technologies Used |
|---|---|
| LLM Deployment Demos | Nvidia GPUs, CUDA optimization |
| AI Infrastructure Demos | Nvidia container runtime, MIG (Multi-Instance GPU) |
| MLOps Pipelines | Nvidia Triton Inference Server, RAPIDS |
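To make the table above concrete, here is a hedged sketch of standing up Triton Inference Server on a GPU host; the container tag and the model repository path are placeholders, not artifacts of this repo.

```bash
# Sketch only: serve a local model repository with Triton on a GPU host.
# The image tag and /path/to/model_repository are placeholders.
docker run --rm --gpus all \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:24.05-py3 \
  tritonserver --model-repository=/models

# Readiness probe against the HTTP endpoint
curl -s localhost:8000/v2/health/ready
```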
Certifications: Multiple Nvidia Deep Learning and GPU Computing certifications
Looking for engineers with Nvidia expertise? My code demonstrates hands-on production experience.
Why Consider This Portfolio?
Deep Technical Expertise:
- Nvidia Technologies: Production experience with GPU-optimized infrastructure
- Kubernetes: 100+ deployment patterns across AWS, GCP, Azure
- AI/ML Infrastructure: Production LLM deployments, MLOps pipelines
- Modern IaC: Pulumi (Go), Terraform, GitOps practices
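A rough sketch of that IaC workflow (the directory name is hypothetical, and `pulumi new` prompts for project and stack details):

```bash
# Sketch: bootstrap a Pulumi (Go) project for Azure, then preview and apply.
# The directory name is hypothetical; `pulumi new` prompts for project/stack names.
mkdir tenant-infra && cd tenant-infra
pulumi new azure-go   # scaffold a Go program from the Azure template
pulumi preview        # show the planned changes before touching anything
pulumi up             # apply the changes
```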
Open Source Contributions:
- RegicideOS — AI-native Rust Linux distribution
- Merlin — LLM router with reinforcement learning (Rust)
- efrit — Native elisp coding agent
- Voice of the Dead — SOTA text-to-speech
Technical Skills Demonstrated: see the demo directory table under Production-Grade Infrastructure Patterns & Demos below.
Contact for Recruiting:
- 🐙 GitHub Issues — Create an issue to reach out
- 📧 Email me via GitHub (if a public address is listed on my profile)
I help organizations:
- Reduce cloud costs by up to 50%
- Accelerate AI/ML infrastructure deployment
- Migrate to Kubernetes with zero downtime
- Build production-ready MLOps pipelines
Proven Results:
"Reduced our AI costs by 60% while improving performance. The infrastructure overhaul was seamless and team training was invaluable." — CTO, FinTech Startup
"Helped us transition from legacy systems to Kubernetes with zero downtime. The migration strategy was brilliant and execution flawless." — VP Engineering, SaaS Company
How to Work With Me:
- Initial Consultation: Free 10-minute discovery call
- Engagement Models:
- Tier 1: Strategy & Planning — $250/hr, 10-hour minimum
- Infrastructure Assessment
- Cost Optimization Analysis
- Technology Roadmap
- Team Training
- Tier 2: Full Implementation — $5,000/project, exclusive to one client
- Complete Infrastructure Overhaul
- AI/ML Pipeline Development
- Kubernetes Migration
- Ongoing Support (retainer-based)
📅 Schedule Free Consultation: cal.com/aiconsulting
What You Get:
- Production-ready code (see demos in this repo)
- Knowledge transfer and team training
- Ongoing support and optimization
- Transparent pricing and clear timelines
Production-Grade Infrastructure Patterns & Demos
| Directory | Description | Technologies | Highlights |
|---|---|---|---|
| `kubernetes/` | 100+ deployment patterns | K8s, EKS, GKE, Talos, Cilium | Multi-cloud, zero-trust, GPU-optimized |
| `llm/` | AI/ML infrastructure | Mistral, OpenAI, Nvidia GPUs | Finetuning, inference, RAG pipelines |
| `pulumi-azure-tenant/` | Multi-tenant IaC | Pulumi (Go), Azure | Secure, scalable patterns |
| `dagger-go-ci/` | CI/CD pipelines | Dagger, Tekton, Go | Container-native, reproducible |
| `rust/` | Rust CLI tools | Rust, Tokio | Performance-critical tools |
| `python/` | Python best practices | Poetry, Type hints | Production-ready patterns |
```bash
# Clone the repository
git clone https://github.com/awdemos/demos.git
cd demos

# Explore available demos
ls -la
```

Certified & Battle-Tested
My portfolio demonstrates production use of Nvidia technologies across multiple domains:
- Container Optimization: Nvidia container runtime, GPU scheduling in Kubernetes
- MIG (Multi-Instance GPU): Partitioning for cost efficiency (see the sketch after this list)
- CUDA Workflows: GPU-accelerated ML pipelines
- Inference Optimization: TensorRT, Triton Inference Server
- Model Serving: Production deployments of Mistral, OpenAI models
- Resource Management: GPU memory optimization, batch processing
- RAPIDS Integration: GPU-accelerated data processing
- Jupyter on GPU: Production notebook environments
- Monitoring: GPU metrics, DCGM integration
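As referenced in the MIG bullet above, here is a hedged sketch of partitioning a MIG-capable GPU; profile IDs differ by GPU model, so the values below are illustrative only.

```bash
# Sketch: partition GPU 0 into MIG slices (requires root and a MIG-capable GPU
# such as an A100/H100; the profile ID below is illustrative and model-specific).
sudo nvidia-smi -i 0 -mig 1        # enable MIG mode on GPU 0 (may require a GPU reset)
sudo nvidia-smi mig -lgip          # list the GPU instance profiles this GPU supports
sudo nvidia-smi mig -cgi 19,19 -C  # create two instances (e.g. 1g.5gb) with compute instances
nvidia-smi -L                      # MIG devices now appear as individually schedulable units
```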
Explore the Demos:
- `demos/llm/` — LLM infrastructure with GPU optimization
- `demos/kubernetes/` — GPU-enabled Kubernetes deployments
- efrit — Native elisp coding agent running in Emacs. Nushell port in progress.
- Voice of the Dead — SOTA TTS project
- RegicideOS — AI-native, Rust-first Linux distribution based on Gentoo, Btrfs, and the COSMIC desktop
- symbolic_ai_elisp_knowledge_base — Open-source reimagining of a Cyc style knowledge base
- Merlin — LLM router written in Rust. Utilizes RL to route LLM prompts. GPL 3.0 project.
- Regicide Dotfiles — Configuration files and dotfiles for RegicideOS
- DCAP — Dynamic Configuration and Application Platform
- Talos — Best in class Kubernetes OS
- Pulumi — Infrastructure as Code in general purpose programming languages
- vCluster — Virtual Kubernetes clusters
- Cilium — eBPF-based networking and security
- Cloudflare — Cost-effective cloud services
- Railway — Instant deployments, effortless scale
- GPTScript — Natural language scripting
- Claude Code — I use it daily
- pairup — AI Pair Programming in Neovim
- ComfyUI — Node-based Stable Diffusion interface
- bincapz — Container image security
- Colima — Container runtime for macOS/Linux
- Dive — Docker image layer analysis
- Podman — Daemonless container engine
- nerdctl — Docker-compatible containerd CLI
- slim — Container image optimization (30x reduction)
- Kitty Terminal — Fast, GPU-accelerated terminal
- Cursor IDE — AI-powered development environment
- Devcontainer — Containerized development
- Devpod — Automated dev environments
- Chainguard — Software supply chain security
- GrapheneOS — Security-focused Android distribution
- NitroPC — Open-source secure PC
For Recruiters:
- 📧 Use GitHub's email (if public) or create an issue to reach out
- 📋 Review the Featured Projects for evidence of expertise
For Consulting:
- 📅 Schedule Free Consultation
- 💼 Review the For Consulting Clients section
Open Source:
- 🐙 Follow on GitHub for new projects
- ⭐ Star interesting projects to show appreciation
Exciting News! I'm launching my comprehensive AI School and Community on Patreon!
🔥 Limited Time Offer: Join our founding member community and get exclusive access to:
- In-depth AI/ML tutorials and hands-on workshops
- Live Q&A sessions with industry experts
- Private community of AI practitioners and enthusiasts
- Early access to cutting-edge AI tools and techniques
- Personalized mentorship and career guidance
👉 Join Now: AI School & Community on Patreon
The Mudskippers AI School rises above the competition! Be among the first to shape the future of AI education!
While this is primarily a demo repository, feel free to create an issue if you would like to connect!
All original code in this repository is released under the MIT License. Third-party components may have different licenses — please refer to their respective documentation.
© 2025 — Portfolio demonstrating AI infrastructure expertise.
🚀 Let's build something amazing together!
