Kickstart your microservices projects with secure authentication, scalable services, automated CI/CD pipelines, and built-in monitoring.
🚀 Version 1.0.1 — stable release
❗ Note: this initial release may contain minor issues.
MicroForge began as my first end-to-end cloud learning platform — a practical environment where I taught myself to design, deploy, and operate real infrastructure instead of learning through isolated exercises. It captures the foundational engineering lessons that shaped how I now approach systems: secure-by-design workflows, reproducible environments, GitOps-driven automation, and observability as a core requirement.
This project is intentionally hands-on and iterative. Every component reflects a decision, a trade-off, or a failure that forced clarity. MicroForge is not meant to be polished; it’s meant to be real. It documents the exact engineering patterns that built my current technical foundation.
MicroForge is a cloud-native microservices template that gives you a reproducible base for building service-oriented systems. It includes:
- IaC: Terraform modules for PostgreSQL RDS, AWS Secrets Manager, and Kubernetes infra.
- Kubernetes: manifests for Deployments, Services, Namespaces, ServiceAccounts and (optionally) IRSA/OIDC roles.
- CI/CD: GitHub Actions workflows for linting, testing, building Docker images and optional deployments.
- Security: environment-specific secrets, password hashing, and optional HIBP checks (see the sketch after this list).
- Observability: Prometheus metrics, Loki JSON logs, Grafana dashboards (provisionable via Helm).
- Microservices: `auth_service` (complete auth flow) and `users-api` (scaffold).
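
The security bullet above is the one that benefits most from a concrete illustration. Below is a minimal sketch of bcrypt password hashing plus an optional HIBP (Have I Been Pwned) k-anonymity check — it assumes `passlib[bcrypt]` is available, and the function names are illustrative rather than the repo's actual module layout:

```python
# Illustrative sketch only — the actual auth_service may structure this differently.
import hashlib
import urllib.request

from passlib.context import CryptContext  # assumes passlib[bcrypt] is installed

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

def hash_password(plain: str) -> str:
    """Hash a password for storage; the plaintext is never persisted."""
    return pwd_context.hash(plain)

def verify_password(plain: str, hashed: str) -> bool:
    """Check a login attempt against the stored hash."""
    return pwd_context.verify(plain, hashed)

def is_pwned(plain: str) -> bool:
    """Optional HIBP check via the k-anonymity range API:
    only the first 5 hex chars of the SHA-1 digest ever leave the process."""
    digest = hashlib.sha1(plain.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode()
    return any(line.split(":")[0] == suffix for line in body.splitlines())
```
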
Workflow summary:

- `lint` → static checks (ruff / mypy / black)
- `test` → unit + integration tests (pytest)
- `build` → Docker image build & tag
- `publish` → push image to registry (optional)
- `deploy` → manual/automated deployment to staging/production
Monitoring is provided and intentionally scoped to two dedicated namespaces so you can compare Dev vs Prod easily:
- Namespaces monitored:
  - `auth-dev` — development / staging environment
  - `auth-prod` — production environment
- Monitoring stack (recommended):
- Prometheus (kube-prometheus-stack) — scrapes service endpoints and kube metrics
- Loki (loki-stack) — collects structured JSON logs
- Grafana — dashboards for latency, throughput, errors; dashboards are pre-bundled and can be provisioned
- Prometheus is configured to scrape metrics from pods/services in the `auth-dev` and `auth-prod` namespaces. Use the Prometheus Helm values file at `monitoring/prometheus-values.yaml` to set the `namespaceSelectors`/`namespaceRegex` or static targets (a service-side instrumentation sketch follows this list).
- Loki is installed with a values file at `monitoring/values.yaml` and configured to collect pod logs cluster-wide, but dashboards are filtered by namespace.
- Grafana contains pre-made dashboards that use the namespace label so you can switch between `auth-dev` and `auth-prod` views.
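
For the scraping and the log dashboards to work, each service has to expose a `/metrics` endpoint and emit structured JSON logs. Here is a minimal instrumentation sketch, assuming `prometheus_client` and the standard `logging` module — the actual services may use a framework integration instead:

```python
# Illustrative instrumentation sketch — adapt to how auth_service actually wires this up.
import json
import logging
import time

from prometheus_client import Counter, start_http_server  # assumes prometheus_client is installed

REQUESTS = Counter("auth_requests_total", "Auth requests handled", ["endpoint", "status"])

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line so Loki/Grafana can parse fields."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

if __name__ == "__main__":
    start_http_server(9000)  # Prometheus scrapes http://<pod>:9000/metrics
    REQUESTS.labels(endpoint="/register", status="200").inc()
    logging.getLogger("auth").info("user registered")
    time.sleep(60)  # keep the process alive briefly so you can curl :9000/metrics

```
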
Install the stack with Helm:

```bash
# add chart repos
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# install Loki (replace monitoring/values.yaml with your values)
helm upgrade --install loki-stack grafana/loki-stack -n monitoring -f monitoring/values.yaml --create-namespace

# install kube-prometheus-stack (replace monitoring/prometheus-values.yaml with your values)
helm upgrade --install prom-stack prometheus-community/kube-prometheus-stack -n monitoring -f monitoring/prometheus-values.yaml
```

In your `monitoring/prometheus-values.yaml` you can add a ServiceMonitor or modify the namespace selector so Prometheus scrapes only `auth-dev` and `auth-prod`. Example snippet:
```yaml
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    # pick up ServiceMonitors only from the two auth namespaces (matched via the
    # kubernetes.io/metadata.name label Kubernetes adds to every namespace)
    serviceMonitorNamespaceSelector:
      matchExpressions:
        - {key: kubernetes.io/metadata.name, operator: In, values: ["auth-dev", "auth-prod"]}
```

(Adjust according to the chart version — the repo contains example values.)
```bash
# port-forward Grafana (example service name for kube-prometheus-stack)
kubectl port-forward svc/prom-stack-grafana -n monitoring 3000:80
# then open http://localhost:3000
```

Prerequisites:

- Docker & Docker Compose (for local/demo)
- Python 3.11+
- PostgreSQL (local or managed) — or use the provided Docker image
- `kubectl`, `helm` (if testing Kubernetes/Helm flows)
- (Optional) AWS CLI + credentials for Terraform / real deployments

Create the namespaces:

```bash
# create dev namespace
kubectl apply -f k8s/namespaces/auth-dev.yaml

# create prod namespace (if you want to test prod layout too)
kubectl apply -f k8s/namespaces/auth-prod.yaml
```

Example content for the namespace manifest (`k8s/namespaces/auth-dev.yaml`):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: auth-dev
  labels:
    environment: dev
```

```bash
# deploy Postgres into auth-dev
kubectl apply -n auth-dev -f ./k8s/postgres/
# deploy auth service into auth-dev
kubectl apply -n auth-dev -f ./k8s/auth_service/
# deploy users-api into auth-dev
kubectl apply -n auth-dev -f ./k8s/users-api/
```

Tip: if the manifests already include a `namespace:` field, passing `-n` is still fine; keep the YAMLs consistent.

Verify the deployment:

```bash
kubectl get pods -n auth-dev
kubectl get svc -n auth-dev
kubectl get deploy -n auth-dev

# get the auth-service pod name
POD_AUTH=$(kubectl get pods -n auth-dev -l app=auth-service -o jsonpath='{.items[0].metadata.name}')
# forward auth-service pod (example: pod exposes 8000)
kubectl port-forward -n auth-dev $POD_AUTH 8000:8000 &
echo "auth service forwarded at http://localhost:8000"
# get postgres pod name and forward
POD_PG=$(kubectl get pods -n auth-dev -l app=postgres -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward -n auth-dev $POD_PG 5432:5432 &
echo "postgres forwarded at localhost:5432"# register a user
curl -s -X POST http://localhost:8000/register \
-H "Content-Type: application/json" \
-d '{"email":"tester@example.com","password":"Test1234!"}' | jq
# request a token (form style)
curl -s -X POST http://localhost:8000/token \
-d "username=tester@example.com&password=Test1234!" | jq- If your test suite expects pods and DB, run the tests from a ephemeral pod with the repo mounted or using an image that has pytest and dependencies:
# run tests from a temporary pod mounting current repo (requires accessible files from runner)
kubectl run -n auth-dev test-runner --rm -i --tty --image=python:3.11 -- bash -c "
pip install -r /tmp/repo/auth_service/requirements-test.txt &&
pytest /tmp/repo/auth_service/tests -q
"(Alternative: kubectl exec into an existing test pod if you have one.)
If you prefer to test locally with Docker, use a dedicated Docker network so containers can talk to each other.
```bash
docker network create microforge-net || true

docker run -d --name mg-postgres --network microforge-net \
  -e POSTGRES_USER=authuser -e POSTGRES_PASSWORD=authpass -e POSTGRES_DB=authdb \
  -p 5432:5432 postgres:13

docker run -d --name auth-service --network microforge-net \
  -e POSTGRES_HOST=mg-postgres -e POSTGRES_USER=authuser -e POSTGRES_PASSWORD=authpass -e POSTGRES_DB=authdb \
  -p 8000:8000 gilbr/auth-service:latest

docker run -d --name users-api --network microforge-net \
  -e POSTGRES_HOST=mg-postgres -e POSTGRES_USER=authuser -e POSTGRES_PASSWORD=authpass -e POSTGRES_DB=authdb \
  -p 8080:8080 gilbr/users-api:latest
```

Environment used by the workflow's PostgreSQL service:

```
POSTGRES_USER=testuser
POSTGRES_PASSWORD=testpass
POSTGRES_DB=testdb
```

Database URL for tests (used in CI jobs):

```
postgresql+asyncpg://testuser:testpass@localhost:5432/testdb
```
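
The `+asyncpg` scheme suggests SQLAlchemy's async engine. Here is a minimal sketch of how a test or service might consume `DATABASE_URL`, assuming SQLAlchemy 2.x with the async extras is installed — check the service's actual session setup before relying on it:

```python
# Sketch of wiring DATABASE_URL into an async engine for tests; names are illustrative.
import asyncio
import os

from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine

DATABASE_URL = os.environ.get(
    "DATABASE_URL",
    "postgresql+asyncpg://testuser:testpass@localhost:5432/testdb",
)

async def check_database() -> None:
    """Open a connection and run a trivial query to confirm the test DB is reachable."""
    engine = create_async_engine(DATABASE_URL)
    async with engine.connect() as conn:
        result = await conn.execute(text("SELECT 1"))
        assert result.scalar() == 1
    await engine.dispose()

if __name__ == "__main__":
    asyncio.run(check_database())
```
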
Environment variables used in CI:

- `DATABASE_URL` — see the PostgreSQL test configuration above.
- `PYTHONPATH` — add the service src dirs when running tests locally (an example is in the workflow).

The GitHub Actions workflow runs on PRs and pushes to `dev` / `main`. Typical steps:

- `lint` — ruff / mypy / black
- `test` — pytest (unit + integration) using the test Postgres service
- `build` — build Docker images (local or CI registry)
- `publish` — optional push to a container registry
- `deploy` — manual or automated promotion (staging → prod)
- Auth Service (`auth_service`) — full authentication flow: registration, email verification, JWT login/refresh, password hashing, user management.
- Users API (`users-api`) — minimal scaffold service: health endpoint, basic CRUD layout, designed to be copied & extended (sketched below).
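
The form-style `/token` login suggests a FastAPI-style stack; if that holds, the `users-api` scaffold reduces to something like the sketch below. The layout is hypothetical — check the actual service code before extending it:

```python
# Hypothetical users-api scaffold sketch (FastAPI assumed; adjust if the repo uses another framework).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="users-api")

class User(BaseModel):
    id: int
    email: str

# In-memory stand-in for the PostgreSQL-backed layer in the real service.
_users: dict[int, User] = {}

@app.get("/health")
def health() -> dict[str, str]:
    """Liveness/readiness probe target."""
    return {"status": "ok"}

@app.post("/users", response_model=User)
def create_user(user: User) -> User:
    _users[user.id] = user
    return user

@app.get("/users/{user_id}", response_model=User)
def get_user(user_id: int) -> User:
    user = _users.get(user_id)
    if user is None:
        raise HTTPException(status_code=404, detail="user not found")
    return user
```

Saved as, say, `users_api_sketch.py`, it can be served with `uvicorn users_api_sketch:app --port 8080` for a quick look.
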
Run tests locally (example):
```bash
# run pytest
pytest -q
# run lint/static checks
ruff check .
black --check .
mypy src
```

For integration tests that depend on pods (Postgres/services), prefer running tests from inside the cluster (see the Kubernetes instructions above) or create ephemeral containers that connect to the running Postgres container.
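
As a concrete example, an integration test that exercises the running auth service through the port-forward (or the local Docker container) could look like the sketch below. It assumes `requests` is available in the test environment and that the token response follows the standard OAuth2 password-flow field names:

```python
# Integration-test sketch against a running auth service (see port-forward / Docker steps above).
import os
import uuid

import requests  # assumed to be in the test requirements

BASE_URL = os.environ.get("AUTH_BASE_URL", "http://localhost:8000")

def test_register_and_login() -> None:
    email = f"it-{uuid.uuid4().hex[:8]}@example.com"
    password = "Test1234!"

    # Register a fresh user (a unique email keeps the test re-runnable).
    r = requests.post(f"{BASE_URL}/register", json={"email": email, "password": password}, timeout=10)
    assert r.status_code in (200, 201)

    # Exchange credentials for a token via the form-style /token endpoint.
    # NOTE: if email verification is enforced, a verification step is needed before this succeeds.
    r = requests.post(f"{BASE_URL}/token", data={"username": email, "password": password}, timeout=10)
    assert r.status_code == 200
    assert "access_token" in r.json()  # field name assumed from the standard OAuth2 response
```
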
Planned near-term improvements:
- Pin dependency versions in future releases (1.0 is stable) and streamline the Helm charts for demo deploys
- Add OAuth2 / social login options
- Add alerting rules for Prometheus + Grafana alertmanager
- Harden Terraform modules; add automated IaC tests
This template is intended for learning, inspiration, and building new projects. If you'd like to contribute improvements:
- Open an issue describing the change / improvement.
- Send a PR against the `dev` branch.
- Respect the license: contact the author before public redistribution or claiming work as your own.
This project is a template created by Gilbert Ramírez (GitHub: https://github.com/MetalCloud1).
License: CC BY-NC-ND (custom) — full terms in LICENSE.md.
You may:
- View, study, and use this template for personal, educational, or inspiration purposes.
- Modify or extend it; substantial transformations that add new functionality may be used as your own work if you properly acknowledge the original template.
You may NOT:
- Claim the original template as entirely your own in resumes/portfolios without prior notice to the author.
- Sell, redistribute, or deploy the original template commercially without consent.
- Docker Compose is useful for quick demos (ephemeral). For more realistic tests use Kubernetes + Helm.
- Monitoring dashboards are pre-made and filtered by namespace — use `auth-dev` vs `auth-prod` to compare behavior.
- Keep a `k8s/namespaces/` folder with namespace manifests so applying the same namespace is reproducible.
- Before running a rebase/squash, create a backup branch: `git branch backup-main`.
Built with ❤️ by Gilbert Ramírez — github.com/MetalCloud1