Code used for Cloud Native Days Romania Amazon EKS Autoscaling Workshop
- AWS Account
- AWS CLI with the following configuration:
  - Create a profile called `cndro2025` with the following command:

    ```shell
    aws configure --profile cndro2025
    ```

  - Enter the following details when prompted:
    - AWS Access Key ID: `<AWS_ACCESS_KEY_ID>`
    - AWS Secret Access Key: `<AWS_SECRET_ACCESS_KEY>`
    - Default region name: `eu-central-1`
    - Default output format: `text`
  - The configuration will look like this in `~/.aws/config`:

    ```ini
    [profile cndro2025]
    region = eu-central-1
    output = text
    ```
- AWS S3 Bucket for storing the Terraform state. It can be created using the AWS CLI with the following command:

  ```shell
  aws s3api create-bucket --bucket <BUCKET_NAME> --region <REGION> --profile cndro2025 --create-bucket-configuration LocationConstraint="eu-central-1" --no-verify-ssl
  ```
- Terraform
- Kubectl
- Helm
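Before moving on, you can sanity-check the prerequisites from a shell. This is a quick sketch; it assumes the `cndro2025` profile is configured and the tools above are already installed, and `<BUCKET_NAME>` is the state bucket you created:

```shell
# Confirm the AWS profile resolves to valid credentials
aws sts get-caller-identity --profile cndro2025

# Confirm the state bucket exists (exits non-zero if it does not)
aws s3api head-bucket --bucket <BUCKET_NAME> --profile cndro2025

# Confirm the client tools are on PATH
terraform version
kubectl version --client
helm version
```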
- Create the EKS Cluster:
  - Navigate to the root directory and run the following commands:

    ```shell
    terraform init
    terraform apply
    ```

  - This will create an EKS cluster with the configuration specified in `terraform.tfvars`.
  - It will also configure `kubectl` to use the EKS cluster.
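Once `terraform apply` finishes, a quick way to confirm that `kubectl` is pointed at the new cluster is to check the active context and the worker nodes (the node count you see depends on your `terraform.tfvars`):

```shell
# Show the active kubeconfig context and the worker nodes that joined the cluster
kubectl config current-context
kubectl get nodes -o wide
```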
- Install Kube Ops View:
  - Run the following commands to install Kube Ops View:

    ```shell
    kubectl create namespace kube-ops-view
    kubectl apply -f ./kube-ops-view-deployment
    kubectl get pod -n kube-ops-view
    ```

  - This will deploy Kube Ops View in the `kube-ops-view` namespace.
  - You can access Kube Ops View using port forwarding:

    ```shell
    kubectl port-forward -n kube-ops-view service/kube-ops-view 8080:80
    ```

    Then, open your browser and navigate to `http://localhost:8080`.
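If you want to wait until the dashboard is actually serving before starting the port-forward, something like the following works (a sketch that assumes the deployment created by the manifests is named `kube-ops-view`):

```shell
# Block until the Kube Ops View rollout completes, or give up after 2 minutes
kubectl rollout status deployment/kube-ops-view -n kube-ops-view --timeout=120s
```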
- Deploy a Sample App:
  - Run the following commands to deploy an application and expose it as a service on TCP port 80:

    ```shell
    kubectl create deployment php-apache --image=eu.gcr.io/k8s-artifacts-prod/hpa-example
    kubectl set resources deployment php-apache --requests=cpu=200m,memory=128Mi
    kubectl expose deployment php-apache --port=80
    kubectl get pod -l app=php-apache
    ```

  - The application is a custom-built image based on the php-apache image. Its index.php page performs calculations to generate CPU load. More information can be found here.
  - Create an HPA resource. This HPA scales up when CPU exceeds 50% of the allocated container resources:

    ```shell
    kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
    kubectl get hpa
    ```

  - View the HPA using kubectl. You will probably see `<unknown>/50%` for 1-2 minutes, and then you should see `0%/50%`.
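The HPA's target-tracking rule can be sketched with integer arithmetic: desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). For example, with 1 replica averaging a hypothetical 250% CPU utilization against the 50% target:

```shell
current_replicas=1
current_util=250   # hypothetical observed average CPU, as % of the request
target_util=50     # the --cpu-percent value passed to kubectl autoscale

# Ceiling division in shell: ceil(a / b) == (a + b - 1) / b with integers
desired=$(( (current_replicas * current_util + target_util - 1) / target_util ))
echo "$desired"    # prints 5, so the HPA would scale to 5 replicas
```

This is why the pod count climbs in steps during the load test below rather than jumping straight to the maximum.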
- Configure Cluster Autoscaler (CA):
  - Prepare your environment for the Cluster Autoscaler. First, we will need to create an AWS IAM role to be used by the CA. For that, open `terraform.tfvars` and set the following variable:

    ```hcl
    enable_eks_cluster_autoscaler = true
    ```

    After that, run terraform again:

    ```shell
    terraform apply
    ```

  - Then, we will need to create our kubectl manifest for the CA. For that, run the following commands:

    ```shell
    export CLUSTER_NAME=$(terraform output -raw eks_cluster_name)
    export AWS_REGION=$(terraform output -raw aws_region)
    export EKS_CLUSTER_AUTOSCALER_IAM_ROLE_ARN=$(terraform output -raw eks_cluster_autoscaler_role_arn)
    helm repo add autoscaler https://kubernetes.github.io/autoscaler
    helm template cndro autoscaler/cluster-autoscaler \
      --namespace kube-system \
      --set "autoDiscovery.clusterName=$CLUSTER_NAME" \
      --set "awsRegion=$AWS_REGION" \
      --set "rbac.serviceAccount.name=cluster-autoscaler-sa" \
      --set "rbac.serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn=$EKS_CLUSTER_AUTOSCALER_IAM_ROLE_ARN" > autoscaling/cluster-autoscaler.yml
    ```

  - After that, apply the manifest:

    ```shell
    kubectl apply -f autoscaling/cluster-autoscaler.yml
    ```

  - Finally, check that the CA is running:

    ```shell
    kubectl get deployment -n kube-system cndro-aws-cluster-autoscaler
    ```
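To confirm that the CA service account actually picked up the IAM role from Terraform, you can read back the IRSA annotation (a sketch; it assumes the Helm chart created the `cluster-autoscaler-sa` service account set above):

```shell
# The annotation should print the role ARN exported from terraform output
kubectl get serviceaccount cluster-autoscaler-sa -n kube-system \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```

If this prints nothing, the CA pods will run with node credentials and scaling will fail with AWS permission errors.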
- Test Cluster Autoscaler with HPA:
  - Add some load to our sample application to trigger the HPA and CA. For that, we will use an additional container to generate load:

    ```shell
    kubectl run -i --tty load-generator --image=busybox /bin/sh
    ```

    Inside the container, run the following command to generate load:

    ```shell
    while true; do wget -q -O- http://php-apache; done
    ```

  - You can watch the HPA scaling up the pods:

    ```shell
    kubectl get hpa -w
    ```

    You will see the HPA scale the pods from 1 up to our configured maximum (10) until the average CPU drops below our target (50%).
  - You should see the CA scaling up the nodes in the EKS cluster. You can check the logs of the CA pod to see the scaling events:

    ```shell
    kubectl logs -f -n kube-system deployment/cndro-aws-cluster-autoscaler
    ```

  - You can also check the EKS console or the Kube Ops View dashboard to see the new nodes being added to the cluster.
  - To stop the load generator, press `Ctrl+C`. This will stop the load generation and allow the HPA to scale down the pods and the CA to scale down the nodes. You should also exit the load testing container by pressing `Ctrl+D` or typing `exit`.
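While the load test runs, it can be handy to keep a second terminal watching node objects directly, alongside the CA logs and Kube Ops View:

```shell
# Watch node objects appear (and later disappear) as the CA adjusts capacity
kubectl get nodes -w
```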
- Migrate from CA to Karpenter:
  - Install Karpenter. For that, run the following commands:

    ```shell
    export CLUSTER_NAME=$(terraform output -raw eks_cluster_name)
    export CLUSTER_ENDPOINT=$(terraform output -raw eks_cluster_endpoint)
    export KARPENTER_IAM_ROLE_ARN=$(terraform output -raw karpenter_role_arn)
    export KARPENTER_SERVICE_ACCOUNT_NAME=$(terraform output -raw karpenter_service_account_name)
    export KARPENTER_NAMESPACE=$(terraform output -raw karpenter_namespace)
    export KARPENTER_INTERUPTION_QUEUE=$(terraform output -raw karpenter_interuption_queue_name)
    export KARPENTER_VERSION="1.3.3"
    helm install karpenter oci://public.ecr.aws/karpenter/karpenter --version ${KARPENTER_VERSION} \
      --namespace ${KARPENTER_NAMESPACE} --create-namespace \
      --set serviceAccount.name=${KARPENTER_SERVICE_ACCOUNT_NAME} \
      --set serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn=${KARPENTER_IAM_ROLE_ARN} \
      --set settings.clusterName=${CLUSTER_NAME} \
      --set settings.clusterEndpoint=${CLUSTER_ENDPOINT} \
      --set settings.interruptionQueue=${KARPENTER_INTERUPTION_QUEUE} \
      --set settings.featureGates.spotToSpotConsolidation=true \
      --set controller.resources.requests.cpu=100m \
      --set controller.resources.requests.memory=128Mi \
      --set controller.resources.limits.cpu=500m \
      --set controller.resources.limits.memory=500Mi \
      --set replicas=1 \
      --wait
    ```

  - Remove the Cluster Autoscaler from our EKS cluster. Uninstall the CA by running the following command:

    ```shell
    kubectl delete -f autoscaling/cluster-autoscaler.yml
    ```

    Then, remove the IAM role created by terraform by opening `terraform.tfvars`, setting the following variable:

    ```hcl
    enable_eks_cluster_autoscaler = false
    ```

    and running terraform again:

    ```shell
    terraform apply
    ```

  - Create the Karpenter NodePool:

    ```shell
    kubectl apply -f autoscaling/karpenter.yml
    ```

  - Now you can start testing Karpenter autoscaling. You can use the same load generator as before.
  - You can see the Karpenter logs by running the following command:

    ```shell
    kubectl logs -f -n karpenter deployment/karpenter
    ```

  - You can also check the EKS console or the Kube Ops View dashboard to see the new nodes being added to the cluster.
  - To stop the load generator, press `Ctrl+C`. This will stop the load generation and allow the HPA to scale down the pods and Karpenter to scale down the nodes. You should also exit the load testing container by pressing `Ctrl+D` or typing `exit`.
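After applying the NodePool manifest, you can also watch Karpenter's own objects instead of raw nodes (a sketch assuming the Karpenter v1 CRDs, which expose `nodepools` and `nodeclaims`):

```shell
# NodePools define what Karpenter may provision; NodeClaims track what it actually did
kubectl get nodepools
kubectl get nodeclaims -w
```

During the load test you should see NodeClaims created within seconds, which is the main observable difference from the slower, node-group-based scaling of the CA.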
- Cleanup:
  - To clean up the resources created during the workshop, run the following command:

    ```shell
    terraform destroy
    ```

  - You will need to manually remove the S3 bucket created to store the terraform state:

    ```shell
    aws s3api delete-bucket --bucket <BUCKET_NAME> --region <REGION> --profile cndro2025
    ```
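Note that `delete-bucket` fails while the bucket still holds objects, so empty it first (a sketch; if you enabled versioning on the bucket you would also need to delete the object versions):

```shell
# Remove all remaining state objects so delete-bucket can succeed
aws s3 rm s3://<BUCKET_NAME> --recursive --profile cndro2025
```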
Requirements:

| Name | Version |
|---|---|
| terraform | >= 1.10.0 |
| aws | ~> 5.83.0 |
| http | 3.4.5 |

Providers:

| Name | Version |
|---|---|
| aws | 5.83.1 |
| http | 3.4.5 |
| null | 3.2.3 |

Modules:

| Name | Source | Version |
|---|---|---|
| eks | terraform-aws-modules/eks/aws | ~> 20.31 |
| karpenter | terraform-aws-modules/eks/aws//modules/karpenter | n/a |
| vpc | terraform-aws-modules/vpc/aws | 5.19.0 |

Resources:

| Name | Type |
|---|---|
| aws_budgets_budget.cost | resource |
| aws_iam_policy.eks_cluster_autoscaler | resource |
| aws_iam_role.eks_cluster_autoscaler | resource |
| aws_iam_role_policy_attachment.eks_cluster_autoscaler | resource |
| null_resource.generate_kubeconfig | resource |
| http_http.myip | data source |
Inputs:

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| availability_zones | List of availability zones to use for the VPC | `list(string)` | n/a | yes |
| eks_cluster_name | Name of the EKS cluster | `string` | `"cndro-eks"` | no |
| eks_cluster_version | Version of the EKS cluster | `string` | `"1.31"` | no |
| email_address | Please enter your valid email address. The email address will be used to receive budget notifications | `string` | n/a | yes |
| enable_budget | Enable budget notifications | `bool` | `true` | no |
| enable_eks_cluster_autoscaler | Create EKS Cluster Autoscaler role and policy | `bool` | `true` | no |
| enable_karpenter | Create Karpenter role and policy | `bool` | `false` | no |
| karpenter_namespace | Karpenter namespace | `string` | `"karpenter"` | no |
| karpenter_service_account | Karpenter service account | `string` | `"karpenter"` | no |
| karpenter_use_spot_instances | Use spot instances in Karpenter | `bool` | `false` | no |
| public_subnets | List of public subnets to create in the VPC | `list(string)` | n/a | yes |
| region | AWS region to deploy resources in | `string` | `"eu-central-1"` | no |
Outputs:

| Name | Description |
|---|---|
| aws_region | AWS region where the resources are deployed |
| eks_cluster_autoscaler_role_arn | EKS Cluster Autoscaler Role ARN |
| eks_cluster_endpoint | EKS Endpoint for EKS control plane |
| eks_cluster_name | EKS Cluster Name |
| eks_kubeconfig_command | Command to configure kubectl to use the EKS cluster |
| karpenter_interuption_queue_name | Karpenter Interruption Queue Name |
| karpenter_namespace | Karpenter Namespace |
| karpenter_role_arn | Karpenter Role ARN |
| karpenter_service_account_name | Karpenter Service Account Name |
| my_ip_address | My public IP address |