
⚒️MicroForge🗡️

Kickstart your microservices projects with secure authentication, scalable services, automated CI/CD pipelines, and built-in monitoring.


🚀 Version 1.0.1 — stable release

❗ Note: this is an early release and may still contain minor issues.

🧭 About MicroForge (context)

MicroForge began as my first end-to-end cloud learning platform — a practical environment where I taught myself to design, deploy, and operate real infrastructure instead of learning through isolated exercises. It captures the foundational engineering lessons that shaped how I now approach systems: secure-by-design workflows, reproducible environments, GitOps-driven automation, and observability as a core requirement.

This project is intentionally hands-on and iterative. Every component reflects a decision, a trade-off, or a failure that forced clarity. MicroForge is not meant to be polished; it’s meant to be real. It documents the exact engineering patterns that built my current technical foundation.

🔍 Project Overview

MicroForge is a cloud-native microservices template that gives you a reproducible base for building service-oriented systems. It includes:

  • IaC: Terraform modules for PostgreSQL RDS, AWS Secrets Manager, and Kubernetes infra.
  • Kubernetes: manifests for Deployments, Services, Namespaces, ServiceAccounts and (optionally) IRSA/OIDC roles.
  • CI/CD: GitHub Actions workflows for linting, testing, building Docker images and optional deployments.
  • Security: environment-specific secrets, password hashing, and optional HIBP checks.
  • Observability: Prometheus metrics, Loki JSON logs, Grafana dashboards (provisionable via Helm).
  • Microservices: `auth_service` (complete auth flow) and `users-api` (scaffold).

🏗️ Architecture

(Project architecture diagram)

📂 Project Structure


1️⃣ Project Overview
2️⃣ Auth Service
3️⃣ Monitoring
4️⃣ Terraform
5️⃣ Template / Demo Service


🔄 CI/CD Pipeline


Workflow summary:

  1. lint → static checks (ruff / mypy / black)
  2. test → unit + integration tests (pytest)
  3. build → Docker image build & tag
  4. publish → push image to registry (optional)
  5. deploy → manual/automated deployment to staging/production

🛰️ Observability & Monitoring (clear scope)

Monitoring is provided and intentionally scoped to two dedicated namespaces so you can compare Dev vs Prod easily:

  • Namespaces monitored:
    • auth-dev — development / staging environment
    • auth-prod — production environment
  • Monitoring stack (recommended):
    • Prometheus (kube-prometheus-stack) — scrapes service endpoints and kube metrics
    • Loki (loki-stack) — collects structured JSON logs
    • Grafana — dashboards for latency, throughput, errors; dashboards are pre-bundled and can be provisioned

How monitoring is configured


  • Prometheus is configured to scrape metrics from pods/services in the auth-dev and auth-prod namespaces. Use the Prometheus Helm values file at monitoring/prometheus-values.yaml to set the namespace selectors or static targets.
  • Loki is installed with a values file at monitoring/values.yaml and collects pod logs cluster-wide, but the dashboards are filtered by namespace.
  • Grafana contains pre-made dashboards that use the namespace label, so you can switch between auth-dev and auth-prod views.
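For illustration, a minimal ServiceMonitor for the auth service in auth-dev might look like the sketch below. This is not a file shipped with the repo: the port name (http), the /metrics path, the scrape interval, and the release: prom-stack label are assumptions; match them to your actual Service definition and Helm release.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: auth-service
  namespace: auth-dev              # create one per monitored namespace (auth-dev, auth-prod)
  labels:
    release: prom-stack            # only needed if the chart's default release selector is in effect
spec:
  selector:
    matchLabels:
      app: auth-service            # the label the auth-service Service/pods already use
  endpoints:
    - port: http                   # assumed named port on the Service
      path: /metrics
      interval: 30s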

Quick install (Helm)

# add chart repos
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# install Loki (replace monitoring/values.yaml with your values)
helm upgrade --install loki-stack grafana/loki-stack -n monitoring -f monitoring/values.yaml --create-namespace

# install kube-prometheus-stack (replace monitoring/prometheus-values.yaml with your values)
helm upgrade --install prom-stack prometheus-community/kube-prometheus-stack -n monitoring -f monitoring/prometheus-values.yaml

Notes: ensuring Prometheus scrapes only the auth namespaces

In your monitoring/prometheus-values.yaml you can add a serviceMonitor or modify the namespaceSelector so Prometheus scrapes only auth-dev and auth-prod. Example snippet:

prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorNamespaceSelector:
      matchExpressions:
        - {key: kubernetes.io/metadata.name, operator: In, values: ["auth-dev", "auth-prod"]}

(Adjust according to the chart version — the repo contains example values.)

Access Grafana

# port-forward Grafana (example service name for kube-prometheus-stack)
kubectl port-forward svc/prom-stack-grafana -n monitoring 3000:80
# then open http://localhost:3000

⚡ Quick Start

Pre-requisites

  • Docker & Docker Compose (for local/demo)
  • Python 3.11+
  • PostgreSQL (local or managed) — or use the provided Docker image
  • kubectl, helm (if testing Kubernetes/Helm flows)
  • (Optional) AWS CLI + credentials for Terraform / real deployments

1) Create namespaces

# create dev namespace
kubectl apply -f k8s/namespaces/auth-dev.yaml

# create prod namespace (if you want to test prod layout too)
kubectl apply -f k8s/namespaces/auth-prod.yaml

Example content for the namespace manifest (k8s/namespaces/auth-dev.yaml):

apiVersion: v1
kind: Namespace
metadata:
  name: auth-dev
  labels:
    environment: dev

2) Deploy Postgres and services (relative paths)

# deploy Postgres into auth-dev
kubectl apply -n auth-dev -f ./k8s/postgres/

# deploy auth service into auth-dev
kubectl apply -n auth-dev -f ./k8s/auth_service/

# deploy users-api into auth-dev
kubectl apply -n auth-dev -f ./k8s/users-api/

Tip: if the manifests already include a namespace: field, passing -n still works as long as the two match; keep the YAMLs consistent.

3) Verify pods & services

kubectl get pods -n auth-dev
kubectl get svc -n auth-dev
kubectl get deploy -n auth-dev

4) Port-forward for local testing

# get the auth-service pod name
POD_AUTH=$(kubectl get pods -n auth-dev -l app=auth-service -o jsonpath='{.items[0].metadata.name}')

# forward auth-service pod (example: pod exposes 8000)
kubectl port-forward -n auth-dev $POD_AUTH 8000:8000 &
echo "auth service forwarded at http://localhost:8000"

# get postgres pod name and forward
POD_PG=$(kubectl get pods -n auth-dev -l app=postgres -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward -n auth-dev $POD_PG 5432:5432 &
echo "postgres forwarded at localhost:5432"

5) Smoke tests (curl)

# register a user
curl -s -X POST http://localhost:8000/register \
  -H "Content-Type: application/json" \
  -d '{"email":"tester@example.com","password":"Test1234!"}' | jq

# request a token (form style)
curl -s -X POST http://localhost:8000/token \
  -d "username=tester@example.com&password=Test1234!" | jq

6) Running tests inside the cluster (recommended for integration tests)

  • If your test suite expects in-cluster pods and the database, run the tests from an ephemeral pod, either with the repo copied into it (e.g. via kubectl cp) or using an image that already has pytest and the dependencies:
# run tests from a temporary pod; assumes the repo is available at /tmp/repo inside the pod
kubectl run -n auth-dev test-runner --rm -i --tty --image=python:3.11 -- bash -c "
  pip install -r /tmp/repo/auth_service/requirements-test.txt &&
  pytest /tmp/repo/auth_service/tests -q
"

(Alternative: kubectl exec into an existing test pod if you have one.)


🐳 Quick Start — Docker (alternative)

If you prefer to test locally with Docker, use a dedicated Docker network so containers can talk to each other.

1) Create Docker network

docker network create microforge-net || true

2) Start Postgres

docker run -d --name mg-postgres --network microforge-net \
  -e POSTGRES_USER=authuser -e POSTGRES_PASSWORD=authpass -e POSTGRES_DB=authdb \
  -p 5432:5432 postgres:13

3) Start services (default images)

docker run -d --name auth-service --network microforge-net \
  -e POSTGRES_HOST=mg-postgres -e POSTGRES_USER=authuser -e POSTGRES_PASSWORD=authpass -e POSTGRES_DB=authdb \
  -p 8000:8000 gilbr/auth-service:latest

docker run -d --name users-api --network microforge-net \
  -e POSTGRES_HOST=mg-postgres -e POSTGRES_USER=authuser -e POSTGRES_PASSWORD=authpass -e POSTGRES_DB=authdb \
  -p 8080:8080 gilbr/users-api:latest
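If you prefer Docker Compose for the same demo, a rough equivalent of the three containers above is sketched below (not a file included in the repo). Images and credentials mirror the docker run commands; note that POSTGRES_HOST becomes the Compose service name postgres instead of mg-postgres.

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: authuser
      POSTGRES_PASSWORD: authpass
      POSTGRES_DB: authdb
    ports:
      - "5432:5432"

  auth-service:
    image: gilbr/auth-service:latest
    environment:
      POSTGRES_HOST: postgres      # Compose service name replaces mg-postgres
      POSTGRES_USER: authuser
      POSTGRES_PASSWORD: authpass
      POSTGRES_DB: authdb
    ports:
      - "8000:8000"
    depends_on:
      - postgres

  users-api:
    image: gilbr/users-api:latest
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_USER: authuser
      POSTGRES_PASSWORD: authpass
      POSTGRES_DB: authdb
    ports:
      - "8080:8080"
    depends_on:
      - postgres

Start everything with docker compose up -d and tear it down with docker compose down.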

🗄️ PostgreSQL test configuration

Environment used by the workflow's PostgreSQL service:

POSTGRES_USER=testuser
POSTGRES_PASSWORD=testpass
POSTGRES_DB=testdb

Database URL for tests (used in CI jobs):

postgresql+asyncpg://testuser:testpass@localhost:5432/testdb

Environment variables used in CI:

DATABASE_URL — See PostgreSQL test configuration above.
PYTHONPATH — add service src dirs when running tests locally (example in workflow).
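For reference, these values typically map into the workflow's test job roughly as in the fragment below. This is a sketch, not the repo's actual workflow: the Postgres image tag and the PYTHONPATH value are assumptions; check the workflow files for the authoritative wiring.

# fragment of a GitHub Actions test job (illustrative)
services:
  postgres:
    image: postgres:15             # assumed tag; match whatever the workflow pins
    env:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
      POSTGRES_DB: testdb
    ports:
      - 5432:5432
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
env:
  DATABASE_URL: postgresql+asyncpg://testuser:testpass@localhost:5432/testdb
  PYTHONPATH: auth_service/src     # assumption; point at the service src dirs your tests import from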

🔄 CI/CD pipeline (high level)

The GitHub Actions workflow runs on PRs and pushes to dev / main. Typical steps:

  1. lint — ruff / mypy / black
  2. test — pytest (unit + integration) using the test Postgres service
  3. build — build Docker images (local or CI registry)
  4. publish — optional push to container registry
  5. deploy — manual or automated promotion (staging → prod)
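A compressed sketch of what such a workflow can look like is shown below. It is illustrative only: job names, action versions, file paths, and the build tag are assumptions, and the publish/deploy stages are omitted; the repository's own workflow under .github/workflows/ is authoritative.

on:
  push:
    branches: [dev, main]
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install ruff black mypy
      - run: ruff check . && black --check . && mypy src

  test:
    needs: lint
    runs-on: ubuntu-latest
    # Postgres service container and env as shown in "PostgreSQL test configuration" above
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r auth_service/requirements-test.txt
      - run: pytest -q

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t auth-service:${{ github.sha }} auth_service/   # assumed Dockerfile location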

📦 Microservices

  • Auth Service (auth_service) — Full authentication flow: registration, email verification, JWT login/refresh, password hashing, user management.
  • Users API (users-api) — Minimal scaffold service: health endpoint, basic CRUD layout, designed to be copied & extended.

🧪 Testing & Linting

Run tests locally (example):

# run pytest
pytest -q

# run lint/static checks
ruff check .
black --check .
mypy src

For integration tests that depend on pods (Postgres/Services), prefer running tests from inside the cluster (see Kubernetes instructions above) or create ephemeral containers that connect to the running Postgres container.


📍 Roadmap


Planned near-term improvements:

  • Pin dependency versions in future releases (1.0.1 is the current stable baseline)
  • Streamline Helm charts for demo deploys
  • Add OAuth2 / social login options
  • Add alerting rules for Prometheus + Grafana alertmanager
  • Harden Terraform modules; add automated IaC tests

🤝 Contributing & License

Contributing (short)

This template is intended for learning, inspiration, and building new projects. If you'd like to contribute improvements:

  • Open an issue describing the change / improvement.
  • Send a PR against the dev branch.
  • Respect the license: contact the author before public redistribution or claiming work as your own.

License (short)

This project is a template created by Gilbert Ramírez (GitHub: https://github.com/MetalCloud1).

License: CC BY-NC-ND (custom) — full terms in LICENSE.md.

You may:

  • View, study, and use this template for personal, educational, or inspiration purposes.
  • Modify or extend it; substantial transformations that add new functionality may be used as your own work if you properly acknowledge the original template.

You may NOT:

  • Claim the original template as entirely your own in resumes/portfolios without prior notice to the author.
  • Sell, redistribute, or deploy the original template commercially without consent.

📝 Notes & tips

  • Docker Compose is useful for quick demos (ephemeral). For more realistic tests use Kubernetes + Helm.
  • Monitoring dashboards are pre-made and filtered by namespace — use auth-dev vs auth-prod to compare behavior.
  • Keep a k8s/namespaces/ folder with namespace manifests so applying the same namespace is reproducible.
  • Before running a rebase/squash, create a backup branch: git branch backup-main.

Built with ❤️ by Gilbert Ramírez — github.com/MetalCloud1
