- Building production-style AI/ML projects with GCP + FastAPI + Gemini
- Focused on interview-ready fundamentals: SQL, Python debugging, APIs, data pipelines, ML evaluation
- Practicing the zero-cost path: open-source, free tiers, public proof-of-work
- Turning every bug into a documented artifact:
- Symptom
- Root cause
- Fix
- Prevention
- Impact
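The artifact structure above can be sketched as a small Python dataclass. This is a minimal illustration, not code from any listed repo; the class name, field names, and the example values are my own assumptions mirroring the bullet list.

```python
# Hypothetical sketch of the bug-artifact template as a dataclass;
# field names mirror the list above, example values are invented.
from dataclasses import dataclass

@dataclass
class BugArtifact:
    symptom: str      # what was observed failing
    root_cause: str   # why it actually happened
    fix: str          # the change that resolved it
    prevention: str   # the guard added so it cannot silently recur
    impact: str       # who or what was affected, and how badly

note = BugArtifact(
    symptom="Nightly load job wrote 0 rows",
    root_cause="Schema drift: upstream renamed a column",
    fix="Pinned the schema and added an explicit column mapping",
    prevention="Pre-load schema validation check",
    impact="One day of dashboard staleness",
)
```

Keeping the artifact as structured data (rather than free text) makes it easy to collect notes into a searchable log.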
Input data -> Validation -> Pipeline -> Model/API -> Monitoring -> Interview story
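The flow above can be sketched as composable stages in plain Python. This is an illustrative toy, not a real pipeline: the function names, the assumed record schema (`id`, `amount`), and the monitoring threshold are all my assumptions.

```python
# Toy sketch of Input data -> Validation -> Pipeline -> Monitoring;
# stage names and the record schema are illustrative assumptions.
def validate(rows):
    # validation stage: drop records missing required fields
    return [r for r in rows if "id" in r and "amount" in r]

def pipeline(rows):
    # transform stage: normalize amounts to floats
    return [{**r, "amount": float(r["amount"])} for r in validate(rows)]

def monitor(rows, expected_min=1):
    # monitoring stage: fail loudly on suspiciously small output
    if len(rows) < expected_min:
        raise ValueError("pipeline produced too few rows")
    return rows

clean = monitor(pipeline([{"id": 1, "amount": "9.5"}, {"amount": "3"}]))
```

The point of the shape is that each stage can be tested in isolation, and the monitoring stage turns a silent data problem into a visible failure.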
- GCP services for data and deployment
- FastAPI backend design and reliability
- Gemini integration patterns
- Observability, retries, and failure handling
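As one concrete example of the retry/failure-handling focus, here is a minimal retry-with-exponential-backoff helper in pure Python. The function name, attempt count, and delay values are assumptions for illustration; production code would typically also narrow the caught exception types and add logging.

```python
# Minimal sketch of retry with exponential backoff; parameters are
# illustrative assumptions, not from any specific project.
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the real error
            time.sleep(base_delay * (2 ** attempt))

# A deliberately flaky callable to exercise the helper
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream failure")
    return "ok"

result = with_retries(flaky)  # succeeds on the third attempt
```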
| Project | What it shows |
|---|---|
| gcp-genai-daily-grind | Daily consistency, GenAI experiments, practical implementation |
| gcp-loan-data-pipeline | Data pipeline design, validation, and reliability thinking |
| bigquery-sql-analytics | SQL correctness and analytics interview signal |
| ETL-pipeline-on-GCP | End-to-end ETL workflows with cloud tooling |
For each serious project, I prepare:
- 30-second explanation
- 90-second STAR story
- 3-minute deep technical walkthrough
This keeps my work explainable at multiple depths, the way real interviews probe it.
## How I Debug (Production Style)
- Reproduce the issue with the smallest input
- Isolate stage: data, API, model, infra, or integration
- Add logs and checks where the failure first becomes visible
- Fix and validate both happy path and failure path
- Write a short postmortem note with prevention action
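Step 4 above, validating both the happy path and the failure path, can be shown with plain asserts. `parse_amount` is a hypothetical helper invented for this sketch; the key idea is that the test proves the guard actually fires, not just that good input works.

```python
# Sketch of validating happy and failure paths; parse_amount is a
# hypothetical helper, not from any listed repo.
def parse_amount(raw):
    """Parse a monetary string; reject negatives instead of passing them on."""
    value = float(raw)
    if value < 0:
        raise ValueError(f"negative amount: {value}")
    return value

# Happy path: valid input parses correctly
assert parse_amount("12.50") == 12.5

# Failure path: the guard must actually raise on bad input
failed_as_expected = False
try:
    parse_amount("-1")
except ValueError:
    failed_as_expected = True
assert failed_as_expected
```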
Build real systems. Debug real failures. Explain tradeoffs clearly.