An infrastructure starter kit for Scaleway, built with Terragrunt and OpenTofu. A learning tool and starting point for deploying a secure, sovereign cloud platform on a European provider.
```
          Internet
              │
         ┌────┴────┐
         │  Load   │ ← Provisioned by the Scaleway Cloud Controller Manager (CCM)
         │Balancer │   via the Envoy Gateway Service
         └────┬────┘
              │ TCP (proxy protocol v2)
┌─────────────┼─────────────┐
│   VPC / Private Network   │
│             │             │
│   ┌─────────┴─────────┐   │
│   │      Kapsule      │   │
│   │   (Kubernetes)    │   │
│   │                   │   │
│   │   Envoy Gateway ←───────── TLS termination (Let's Encrypt via cert-manager)
│   │         │         │   │
│   │  ┌──────┴──────┐  │   │
│   │  │  Sovereign  │  │   │
│   │  │ Cloud Wisdom│  │   │
│   │  └──────┬──────┘  │   │
│   └─────────┼─────────┘   │
│             │             │
│   ┌─────────┴─────────┐   │
│   │    PostgreSQL     │   │
│   │   (Managed DB)    │   │
│   └───────────────────┘   │
└───────────────────────────┘

  Secret Manager        Container Registry
  (DB credentials +     (Docker images)
   API auth token)
```
| Component | Description | Security |
|---|---|---|
| VPC + Private Network | Isolated network with a 172.16.0.0/22 subnet. All resources communicate over private IPs only. | Network isolation for all internal resources |
| Kapsule | Managed Kubernetes cluster with Cilium CNI, autoscaling (1–3 nodes), and autohealing. | Attached to private network, no public node exposure |
| PostgreSQL | Managed database (PostgreSQL 16) with automated backups (daily, 7-day retention). | Private network only — no public endpoint. Password managed via Secret Manager. |
| Envoy Gateway | Kubernetes Gateway API implementation using Envoy proxy. Routes traffic based on HTTPRoute rules. Exposed via a CCM-managed Scaleway Load Balancer. | TLS termination via cert-manager (Let's Encrypt). The LB is the only externally reachable component. |
| cert-manager | Automates Let's Encrypt certificate lifecycle: request, DNS-01 challenge validation (via cert-manager-webhook-scaleway), storage as K8s Secret, and auto-renewal. | Certificates stored as Kubernetes Secrets, never on disk |
| Secret Manager | Stores database credentials and API auth token. Terraform creates secret shells (name/tags); values are pushed via scripts/push-secrets.sh using the scw CLI, keeping them out of Terraform state. Synced to Kubernetes via External Secrets Operator. | Secrets never hardcoded, never in Terraform state, injected at runtime |
| Container Registry | Private Docker image registry hosted on Scaleway. | Images stored in France, private access only |
| Cockpit | Managed observability platform (Grafana, Mimir, Loki, Tempo). Kapsule metrics collected automatically. | Data stays in France, managed by Scaleway |
**Why not a Terraform-managed Load Balancer?** This project initially used a Scaleway Load Balancer managed entirely by Terraform — a natural choice when your infrastructure-as-code tool is Terraform and you want everything in one dependency graph. It worked, but the backend configuration required hardcoding node IPs. This broke whenever Kapsule auto-upgraded nodes (the IPs changed) or the cluster autoscaler added a node (the new node wasn't in the backend list). By letting the Kubernetes Cloud Controller Manager (CCM) manage the Load Balancer instead, backends are updated automatically — node upgrades and autoscaling just work.
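The CCM pattern hinges on Kubernetes Services of type `LoadBalancer`: Envoy Gateway creates such a Service, and the Scaleway CCM watches it, provisions the LB, and keeps the backend node list in sync as nodes come and go. A minimal sketch of the kind of Service involved (the name, selector, and ports here are illustrative, not the project's actual generated manifest):

```yaml
# Illustrative only — Envoy Gateway generates a Service like this itself;
# the Scaleway CCM then provisions an LB and tracks backend nodes automatically.
apiVersion: v1
kind: Service
metadata:
  name: envoy-gateway-lb          # hypothetical name
  namespace: envoy-gateway-system
spec:
  type: LoadBalancer              # this is what the CCM reacts to
  selector:
    app: envoy                    # hypothetical selector
  ports:
    - name: https
      port: 443
      targetPort: 8443
```

Deleting this Service (e.g. by uninstalling Envoy Gateway) is what triggers the CCM to deprovision the LB — which is why the teardown section below insists on removing Kubernetes resources first.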
Module dependency graph:

```
vpc
├── kapsule
└── database

secret-manager (independent)
registry (independent)
cockpit (independent)
```
```
infrastructure/
├── root.hcl                         # Shared Terragrunt config (S3 backend, provider)
├── modules/                         # Reusable Terraform modules
│   ├── vpc/                         # VPC + private network
│   ├── kapsule/                     # Kubernetes cluster + node pool
│   ├── database/                    # PostgreSQL managed database
│   ├── secret-manager/              # Scaleway Secret Manager
│   ├── registry/                    # Scaleway Container Registry
│   └── cockpit/                     # Scaleway Cockpit (observability)
└── dev/                             # Dev environment
    ├── env.hcl                      # Environment-specific variables
    ├── vpc/terragrunt.hcl
    ├── kapsule/terragrunt.hcl
    ├── database/terragrunt.hcl
    ├── secret-manager-db-password/terragrunt.hcl
    ├── secret-manager-api-token/terragrunt.hcl
    ├── registry/terragrunt.hcl
    └── cockpit/terragrunt.hcl

k8s/                                 # Kubernetes manifests
├── namespace.yaml
├── gateway/                         # Envoy Gateway + TLS
│   ├── gatewayclass.yaml            # GatewayClass (Envoy Gateway controller)
│   ├── envoyproxy.yaml              # EnvoyProxy (Scaleway LB annotations)
│   ├── clienttrafficpolicy.yaml     # PROXY protocol parsing
│   ├── gateway.yaml                 # Gateway (HTTP/HTTPS listeners)
│   ├── httproute-redirect.yaml      # HTTP → HTTPS redirect (301)
│   └── cluster-issuer.yaml          # cert-manager ClusterIssuer (DNS-01 solver)
├── external-secrets/                # External Secrets Operator config
│   ├── external-secret.yaml         # DB password sync
│   └── api-auth-token.yaml          # API auth token sync
└── app/                             # Application deployment
    ├── deployment.yaml
    ├── service.yaml
    └── httproute.yaml               # App routing rules (Gateway API)

scripts/
├── validate.sh                      # Validation & security scanning
├── deploy.sh                        # Application deployment to Kapsule
├── push-secrets.sh                  # Push secret values to Secret Manager (bypasses TF state)
└── rotate-api-token.sh              # Manual API token rotation
```
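To give a sense of what lives under k8s/gateway/, here is a hedged sketch of a Gateway with HTTP and HTTPS listeners wired to cert-manager via its Gateway API support (the resource names, issuer name, and Secret name are assumptions; the actual gateway.yaml may differ):

```yaml
# Sketch only — names are illustrative, not the project's real manifest.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
  namespace: envoy-gateway-system
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # assumed ClusterIssuer name
spec:
  gatewayClassName: eg
  listeners:
    - name: http                                  # plain HTTP, redirected to HTTPS
      protocol: HTTP
      port: 80
    - name: https
      protocol: HTTPS
      port: 443
      hostname: sovereigncloudwisdom.eu
      tls:
        mode: Terminate
        certificateRefs:
          - name: sovereigncloudwisdom-tls        # Secret created by cert-manager
```

The annotation is what lets cert-manager watch the Gateway, run the DNS-01 challenge, and populate the referenced TLS Secret automatically.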
The root Terragrunt config (root.hcl) is environment-agnostic — all environment-specific values live in env.hcl. To add a new environment (staging, prod), just create a new directory with its own env.hcl.
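The pattern can be sketched as follows (the env.hcl keys, module inputs, and file excerpts are illustrative, not the repository's exact contents):

```hcl
# dev/env.hcl — all environment-specific values in one place (keys illustrative)
locals {
  environment  = "dev"
  region       = "fr-par"
  cluster_name = "dev-kapsule"
}

# dev/kapsule/terragrunt.hcl (excerpt) — reads env.hcl from the parent directory,
# so the same file works unchanged when copied into staging/ or prod/
include "root" {
  path = find_in_parent_folders("root.hcl")
}

locals {
  env = read_terragrunt_config(find_in_parent_folders("env.hcl")).locals
}

terraform {
  source = "../../modules/kapsule"
}

inputs = {
  cluster_name = local.env.cluster_name
  region       = local.env.region
}
```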
- OpenTofu >= 1.6.0
- Terragrunt >= 0.93.0
- kubectl
- Helm (for Envoy Gateway, cert-manager, External Secrets Operator)
- jq
- A Scaleway account with API credentials
In the Scaleway console, create an Object Storage bucket:
- Name: `scaleway-starter-kit`
- Region: `fr-par`
- Visibility: Private
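For reference, the S3 backend block in root.hcl that points at such a bucket typically looks something like this (Scaleway's Object Storage is S3-compatible, but the exact config keys and skip flags vary with OpenTofu version and are assumptions here):

```hcl
# Sketch of a remote_state block for Scaleway Object Storage.
# Flag names are assumptions — check the OpenTofu s3 backend docs for your version.
remote_state {
  backend = "s3"
  config = {
    bucket   = "scaleway-starter-kit"
    key      = "${path_relative_to_include()}/terraform.tfstate"
    region   = "fr-par"
    endpoint = "https://s3.fr-par.scw.cloud"
    # Scaleway is not AWS, so AWS-specific validations must be skipped:
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
  }
}
```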
```
cp .env.example .env
```

Edit `.env` with your Scaleway credentials and database password:
```
export SCW_ACCESS_KEY=<your-access-key>
export SCW_SECRET_KEY=<your-secret-key>
export SCW_DEFAULT_ORGANIZATION_ID=<your-org-id>
export SCW_DEFAULT_PROJECT_ID=<your-project-id>
export TF_VAR_db_password=<a-secure-password>
export TF_VAR_api_auth_token=<generate-with-openssl-rand-hex-32>
export KUBECONFIG="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/infrastructure/dev/.kubeconfig"
```

Then load it:

```
source .env
```

```
cd infrastructure/dev
terragrunt run --all apply
```

Terragrunt will deploy in order: VPC -> Kapsule + Database (parallel). Secret Manager (shells only), Container Registry, and Cockpit are independent and deploy in parallel with the rest.
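The ordering comes from Terragrunt `dependency` blocks; for example, the kapsule config might wire in the VPC output like this (the output and input names are assumptions):

```hcl
# dev/kapsule/terragrunt.hcl (excerpt) — names are illustrative
dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  # Attach the cluster to the private network created by the vpc module
  private_network_id = dependency.vpc.outputs.private_network_id
}
```

`terragrunt run --all apply` reads these blocks to build the graph shown earlier, applying vpc before kapsule and database.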
Terraform creates empty secret shells. Push the actual values via the scw CLI (keeps them out of Terraform state):

```
./scripts/push-secrets.sh
```

Use `--dry-run` to preview, or pass secret names to push selectively:

```
./scripts/push-secrets.sh --dry-run
./scripts/push-secrets.sh dev-db-password
```

After the Kapsule cluster is deployed:
```
cd infrastructure/dev/kapsule
terragrunt output -json kubeconfig | jq -r '.[0].config_file' > ../.kubeconfig
chmod 600 ../.kubeconfig
```

Then connect to the cluster:

```
kubectl get nodes
```

The starter kit includes Kubernetes manifests for Sovereign Cloud Wisdom, a demo application that serves curated wisdom about European digital sovereignty.
The app Docker image must be built and pushed to the Container Registry first (see the app repository).
Run the deployment script:
```
./scripts/deploy.sh
```

The script will:
- Install Envoy Gateway — Kubernetes Gateway API implementation using Envoy proxy
- Apply Gateway API resources — GatewayClass, EnvoyProxy, Gateway, HTTPRoutes (creates a Scaleway Load Balancer via the CCM)
- Install cert-manager — automates Let's Encrypt TLS certificates (with Gateway API support)
- Install cert-manager-webhook-scaleway — DNS-01 solver via Scaleway DNS API
- Install External Secrets Operator — syncs secrets from Scaleway Secret Manager
- Create Kubernetes secrets (registry pull credentials, Scaleway API access for ESO and DNS-01)
- Create a `ClusterSecretStore` pointing to Scaleway Secret Manager
- Sync the database password and API auth token as Kubernetes secrets via `ExternalSecret`
- Create the app `ConfigMap` with database connection details (fetched from Terragrunt outputs)
- Deploy the application (Deployment + ClusterIP Service + HTTPRoute)
- Print the Load Balancer address for DNS configuration
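The `ClusterSecretStore`/`ExternalSecret` wiring in the steps above can be sketched as follows (the store name and the Scaleway remote-key format are assumptions; check the External Secrets Operator docs for the Scaleway provider):

```yaml
# Sketch only — syncs the Scaleway secret into a Kubernetes Secret.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-password
  namespace: sovereign-wisdom
spec:
  refreshInterval: 1h                 # re-fetch periodically to pick up rotations
  secretStoreRef:
    kind: ClusterSecretStore
    name: scaleway-secret-store       # assumed store name
  target:
    name: db-password                 # resulting Kubernetes Secret
  data:
    - secretKey: password
      remoteRef:
        key: name:dev-db-password     # Scaleway provider key format (assumed)
```

Because the operator pulls values at runtime, neither Terraform state nor the Git repository ever contains the secret material.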
Configure DNS:

After the script completes, it prints the Load Balancer hostname. Create a DNS record:

```
sovereigncloudwisdom.eu  A  ->  <LB IP>
```

Note: Apex domains cannot use CNAME records (DNS specification). Resolve the LB hostname to get the IP:

```
dig +short <LB hostname>
```
Once DNS propagates, cert-manager automatically obtains a Let's Encrypt certificate via DNS-01. Check progress:

```
kubectl get certificate -n envoy-gateway-system
```

Verify:

```
curl https://sovereigncloudwisdom.eu/
```

Retrieve the API auth token (for use in client applications):

```
kubectl get secret api-auth-token -n sovereign-wisdom -o jsonpath='{.data.api-token}' | base64 -d; echo
```

To rotate the token manually:

```
./scripts/rotate-api-token.sh
```

Cockpit is Scaleway's managed observability platform. Kapsule metrics are collected automatically at no cost, and setup is minimal.
```
cd infrastructure/dev/cockpit
terragrunt output grafana_url
```

Open the Grafana URL and log in with your Scaleway IAM credentials. Pre-configured dashboards for Kapsule are available under the Scaleway folder.
1. Create a new environment directory:

   ```
   mkdir -p infrastructure/staging
   ```

2. Copy and adjust `env.hcl`:

   ```
   cp infrastructure/dev/env.hcl infrastructure/staging/env.hcl
   # Edit values (instance sizes, cluster name, etc.)
   ```

3. Copy the child module configs (they're identical — all values come from `env.hcl`):

   ```
   for module in vpc kapsule database secret-manager-db-password secret-manager-api-token registry cockpit; do
     mkdir -p "infrastructure/staging/$module"
     cp "infrastructure/dev/$module/terragrunt.hcl" "infrastructure/staging/$module/"
   done
   ```

4. Deploy:

   ```
   cd infrastructure/staging
   terragrunt run --all apply
   ```
A validation script checks formatting, configuration, dependencies, linting, and security:
```
source .env
./scripts/validate.sh
```

The script runs the following checks:
| Check | Tool | Description |
|---|---|---|
| HCL format | `terragrunt hcl fmt` | Ensures consistent formatting |
| Terraform validation | `terragrunt validate` | Validates resource configurations |
| Dependency graph | `terragrunt dag graph` | Detects circular dependencies |
| Linting | `tflint` | Catches common Terraform mistakes |
| Security scan | `trivy` | Flags security misconfigurations (HIGH/CRITICAL) |
Optional tools (tflint, trivy) are skipped if not installed:

```
brew install tflint trivy
```

To validate a different environment:

```
./scripts/validate.sh infrastructure/staging
```

This project is designed with European data sovereignty in mind. All resources are deployed exclusively in France (fr-par), using Scaleway — a French cloud provider not subject to US extraterritorial surveillance laws (CLOUD Act, FISA Section 702).
For a non-technical overview of why sovereign cloud matters, see:
- WHY-SOVEREIGN-CLOUD.md (English)
- WHY-SOVEREIGN-CLOUD.fr.md (French)
For details on how this project addresses GDPR, SecNumCloud, NIS2, and DORA requirements, see:
- COMPLIANCE.md (English)
- COMPLIANCE.fr.md (French)
This starter kit is a foundation, not a turnkey production setup. You would still need to add:
- GitOps workflow (ArgoCD, Flux)
- CI/CD pipeline for infrastructure and application
- Network policies for fine-grained pod-to-pod traffic control
- Secure private network access (VPN or bastion) for reaching internal resources like the database
- Backup strategy beyond the managed database backups
- Web Application Firewall to protect against common threats
- And more, depending on your specific requirements
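As an example of the network-policy gap above, a restrictive policy for the app namespace might start like this (a sketch; with Cilium as the CNI you could also use its `CiliumNetworkPolicy` CRD for L7 rules):

```yaml
# Deny all ingress to app pods except traffic from the Envoy Gateway namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-gateway
  namespace: sovereign-wisdom
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: envoy-gateway-system
```

Pods with no matching allow rule (e.g. anything reaching the app from another namespace) would then be dropped by the CNI.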
Important: Follow this order to avoid orphaned resources.
1. Delete Kubernetes resources first
Envoy Gateway creates a Scaleway Load Balancer via the CCM. If you destroy the cluster without removing it first, the LB becomes orphaned in your Scaleway account.
```
helm uninstall eg -n envoy-gateway-system
```

Wait for the Load Balancer to disappear in the Scaleway console before proceeding.
2. Destroy the infrastructure
```
cd infrastructure/dev
terragrunt run --all destroy
```

3. Clean up Scaleway secrets
Scaleway secrets survive `terragrunt destroy`. Delete them manually before redeploying, or you'll get a "cannot have same secret name" error:

```
scw secret secret list
scw secret secret delete <secret-id>
```