Production-style EKS baseline with Terraform, IRSA, ALB ingress, and CloudWatch observability.
This repo provisions a production-style Kubernetes baseline on AWS using EKS:
- VPC with public/private subnets across AZs
- EKS control plane + managed node group (private)
- IRSA (IAM Roles for Service Accounts)
- Core add-ons:
  - EKS managed add-ons: vpc-cni, coredns, kube-proxy, aws-ebs-csi-driver
  - AWS Load Balancer Controller (Ingress -> ALB) via Helm
- Observability (CloudWatch):
  - CloudWatch Agent (metrics / Container Insights)
  - Fluent Bit (pod logs -> CloudWatch Logs)
This setup follows a common production pattern: private worker nodes, public ingress via ALB, and IRSA for least-privilege access.
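As a sketch of the IRSA piece, the pattern is an IAM role whose trust policy only allows a specific service account to assume it through the cluster's OIDC provider. The variable names, the `default` namespace, and the `example-sa` service account below are illustrative, not this repo's actual interface:

```hcl
# Hypothetical IRSA trust policy: only the "example-sa" service account
# in the "default" namespace may assume this role via the cluster's
# OIDC provider. Variable and resource names are illustrative.
variable "oidc_provider_arn" {
  type = string
}

variable "oidc_issuer" {
  type        = string
  description = "Cluster OIDC issuer URL without the https:// prefix"
}

data "aws_iam_policy_document" "irsa_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [var.oidc_provider_arn]
    }

    # Pin the role to one service account, not the whole cluster.
    condition {
      test     = "StringEquals"
      variable = "${var.oidc_issuer}:sub"
      values   = ["system:serviceaccount:default:example-sa"]
    }
  }
}

resource "aws_iam_role" "example_irsa" {
  name               = "example-irsa-role"
  assume_role_policy = data.aws_iam_policy_document.irsa_trust.json
}
```

Attaching only the policies that workload needs to this role is what gives the least-privilege property mentioned above.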
Repository layout:

- `envs/dev`: environment wrapper (backend, providers, variables)
- `modules/network`: VPC + subnets + NAT
- `modules/eks`: EKS cluster + node groups + OIDC provider
- `modules/addons`: add-ons + IRSA + Helm releases
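A minimal sketch of how `envs/dev` might compose the modules (the variable and output names here are assumptions, not the repo's actual interface):

```hcl
# Hypothetical envs/dev/main.tf: wire network -> eks -> addons.
module "network" {
  source   = "../../modules/network"
  vpc_cidr = "10.0.0.0/16" # illustrative
}

module "eks" {
  source             = "../../modules/eks"
  vpc_id             = module.network.vpc_id
  private_subnet_ids = module.network.private_subnet_ids
}

module "addons" {
  source               = "../../modules/addons"
  cluster_name         = module.eks.cluster_name
  oidc_provider_arn    = module.eks.oidc_provider_arn
  enable_k8s_providers = var.enable_k8s_providers
}
```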
- Remote state: create remote state (S3 bucket + DynamoDB table), or keep `backend.tf` disabled.
- Copy tfvars:

  ```sh
  cp terraform.tfvars.example terraform.tfvars
  ```

- Phase A: create VPC + EKS (no Helm/Kubernetes providers yet).

  ```sh
  terraform init
  terraform apply -var="enable_k8s_providers=false"
  ```

- Phase B: install add-ons (ALB Controller + CloudWatch) after the cluster exists. Set `enable_k8s_providers = true` in `terraform.tfvars`, then:

  ```sh
  terraform apply
  ```
Why two phases? The Kubernetes/Helm providers need the cluster endpoint and an auth token, which only exist after EKS is created, so add-ons are installed in Phase B.
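One common way to express this bootstrap (the exact mechanism in this repo may differ) is to configure the Kubernetes provider from EKS data sources, so the endpoint and token are read from the cluster created in Phase A:

```hcl
# Hypothetical provider bootstrap: look up the existing cluster and a
# short-lived auth token, then hand both to the Kubernetes provider.
# These data sources only resolve once the cluster exists.
data "aws_eks_cluster" "this" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```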
This project provisions real AWS infrastructure:
- EKS control plane (hourly)
- EC2 worker nodes
- NAT Gateway
- CloudWatch metrics and log ingestion
Remember to destroy resources when finished:

```sh
terraform destroy
```
| Variable | Purpose |
|---|---|
| `enable_k8s_providers` | Enables the Kubernetes/Helm providers + add-ons |
| `enable_cloudwatch` | Enables the CloudWatch Agent + Fluent Bit |
- `enable_k8s_providers = false` -> only VPC + EKS
- `enable_k8s_providers = true` -> installs add-ons via the Helm/Kubernetes providers
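The flag typically gates the Helm releases with `count`, so Phase A plans cleanly with zero add-on resources. A sketch, assuming the variable names above (the chart repository and name are the real ones published by AWS, but the values shown are illustrative):

```hcl
# Hypothetical gated release: created only once enable_k8s_providers = true.
resource "helm_release" "alb_controller" {
  count      = var.enable_k8s_providers ? 1 : 0
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = var.cluster_name
  }
}
```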
After apply:

```sh
aws eks update-kubeconfig --region <region> --name <cluster_name>
kubectl get nodes
kubectl get pods -A
```
- Logs: the CloudWatch Logs group defaults to `/aws/eks/<cluster_name>/pods` (via the Fluent Bit config)
- Metrics: Container Insights metrics via the CloudWatch Agent in the `amazon-cloudwatch` namespace
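If you prefer to pre-create the log group with a retention policy rather than letting Fluent Bit create it with unlimited retention, a sketch (this repo may already manage the group; the 14-day value is illustrative):

```hcl
# Hypothetical: pre-create the Fluent Bit log group so pod logs
# expire instead of being retained (and billed) indefinitely.
resource "aws_cloudwatch_log_group" "pod_logs" {
  name              = "/aws/eks/${var.cluster_name}/pods"
  retention_in_days = 14
}
```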