terraform-aws-eks

Description

This module creates an EKS cluster on AWS and is designed to be used together with the vpc and sso-roles modules.

How to release

New releases are created automagically by the Release Drafter GH action.

The type of release bump is derived from commit tags (feat/bugfix/etc...).

The release is created as a draft, so you have to edit it manually and publish it as final.

Breaking changes

Version 10.0.0 - Kubernetes Resource Type Migration

Version 10.0.0 introduces a breaking change due to upgrading deprecated kubernetes_* resources to their kubernetes_*_v1 counterparts. This affects the following resources:

  • kubernetes_namespace → kubernetes_namespace_v1
  • kubernetes_service → kubernetes_service_v1
  • kubernetes_secret → kubernetes_secret_v1
  • kubernetes_storage_class → kubernetes_storage_class_v1

Why moved blocks don't work:

Terraform's moved block cannot be used for this migration because the Kubernetes provider does not support moving resource state across different resource types:

Error: Move Resource State Not Supported

The "kubernetes_storage_class_v1" resource type does not support moving resource state across resource types.

Suggested Solution:

Add the following removed and import blocks to your root module (e.g., state.tf) to migrate resources without destroying them:

# Remove old resource from state without destroying it
removed {
  from = module.eks_security.kubernetes_storage_class.gp3

  lifecycle {
    destroy = false
  }
}
# Import existing storage class into new resource type
import {
  to = module.eks_security.kubernetes_storage_class_v1.gp3[0]
  id = "gp3"
}

After successful migration (terraform apply), you can remove the removed and import blocks from your configuration.
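To sanity-check the migration before cleaning up, you can confirm that the old address is gone from state and the new _v1 address is present (a sketch, using the storage class from the example above):

```shell
# Expect only the _v1 address to be listed after a successful apply
terraform state list | grep kubernetes_storage_class
```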

Version 4.0 - Authentication Mode Change

Version 4.0 introduces an authentication mode change from CONFIG_MAP to API_AND_CONFIG_MAP. This change requires manual intervention to update the clusters. The following steps should be taken to update the clusters:

aws eks update-cluster-config --name CLUSTER_NAME --access-config authenticationMode=API_AND_CONFIG_MAP --region AWS_REGION

This will change the authentication mode to API_AND_CONFIG_MAP, and the next terraform plan/apply will work as expected.
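You can verify that the change took effect before the next Terraform run (assumes the AWS CLI is configured for the target account):

```shell
# Should print API_AND_CONFIG_MAP once the update has been applied
aws eks describe-cluster --name CLUSTER_NAME --region AWS_REGION \
  --query "cluster.accessConfig.authenticationMode" --output text
```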

Supported versions

The module currently supports the following versions of Kubernetes:

  • 1.32
  • 1.33

Note

The default version for the EKS cluster is 1.32.

Examples

Minimal

A minimal example of how to use this module.

module "eks" {
    source       = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
    cluster_name = local.cluster_name
    region       = var.region

    vpc_config = module.vpc.config

    extra_role_mapping = module.sso_roles.default_mappings

    datadog_api_key     = var.datadog_api_key
    traefik_cert_arn    = var.traefik_cert_arn
    alb_logs_bucket_id  = module.region.alb_logs_bucket_id
}

Internal Load Balancer

Example of Internal load balancer setup

module "eks" {
    source       = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
    cluster_name = local.cluster_name
    region       = var.region

    vpc_config = module.vpc.config

    extra_role_mapping = module.sso_roles.default_mappings

    datadog_api_key     = var.datadog_api_key
    traefik_cert_arn    = var.traefik_cert_arn
    alb_logs_bucket_id  = module.region.alb_logs_bucket_id

    internal_nlb_enabled = true
    internal_nlb_acm_arn = module.acm.cert_arn
}

Static AutoScalingGroup

Example of using a static Auto Scaling Group

module "eks" {
  source       = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
  cluster_name = local.cluster_name
  region       = var.region

  vpc_config           = module.vpc.config
  extra_role_mapping   = module.sso_roles.default_mappings
  environment          = var.environment
  traefik_cert_arn     = module.acm_v3.cert_arn
  datadog_api_key      = var.datadog_api_key
  alb_logs_bucket_id   = module.region.alb_logs_bucket_id
  monitoring_enabled   = false
  internal_nlb_enabled = true

  static_autoscaling_group = {
    size = 8
    arch = "arm64"
    type = "m7g.16xlarge"
  }
}

Private Subnets

Example of using private subnets for internal NLB

module "eks" {
  source       = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
  cluster_name = local.cluster_name
  region       = var.region

  vpc_config                           = module.vpc.config
  extra_role_mapping                   = module.sso_roles.default_mappings
  environment                          = var.environment
  traefik_cert_arn                     = module.acm_v3.cert_arn
  datadog_api_key                      = var.datadog_api_key
  alb_logs_bucket_id                   = module.region.alb_logs_bucket_id
  monitoring_enabled                   = false
  internal_nlb_enabled                 = true
  use_private_subnets_for_internal_nlb = true
}

Additional Security Group Rules

Example of using additional_security_group_rules to add rules to the node security group and additional_cluster_security_group_rules for the cluster security group.

module "eks" {
  source       = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
  cluster_name = local.cluster_name
  region       = var.region

  environment        = var.environment
  vpc_config         = module.vpc.config
  extra_role_mapping = module.sso_roles.default_mappings

  traefik_cert_arn     = module.acm.cert_arn
  internal_nlb_enabled = true

  datadog_api_key    = var.datadog_api_key
  alb_logs_bucket_id = module.region.alb_logs_bucket_id

  monitoring_notification_channel = "@slack-TFH-infrastructure-alerts-stage"

  # Add rules to the NODE security group
  additional_security_group_rules = [
    {
      type        = "ingress"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      description = "Allow all ingress traffic from specific CIDR blocks"
      cidr_blocks = ["10.100.0.0/16", "192.168.0.0/24"]
    },
    {
      type                          = "ingress"
      from_port                     = 0
      to_port                       = 0
      protocol                      = "-1"
      description                   = "Allow all ingress traffic from the cluster security group"
      source_cluster_security_group = true
    }
  ]

  # Add rules to the CLUSTER security group
  additional_cluster_security_group_rules = [
    {
      type                       = "ingress"
      from_port                  = 0
      to_port                    = 0
      protocol                   = "-1"
      description                = "Allow all ingress traffic from the node security group"
      source_node_security_group = true
    }
  ]
}

Associate access policies with access entries

The access_entries input allows you to associate access policies with access entries. It is a map where the key is the name of the access entry and the value is an object describing the principal, access scope, and policy:

module "eks" {
  source       = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
  cluster_name = local.cluster_name
  region       = var.region

  vpc_config = module.vpc.config

  extra_role_mapping = module.sso_roles.default_mappings

  datadog_api_key     = var.datadog_api_key
  traefik_cert_arn    = var.traefik_cert_arn
  alb_logs_bucket_id  = module.region.alb_logs_bucket_id

  access_entries = {
    # example with cluster access with default AmazonEKSAdminPolicy
    applicationA = {
      principal_arn     = "arn:aws:iam::507152310572:role/github-deployment-applicationA"
      access_scope_type = "cluster"
    }
    # example with namespace access
    applicationB = {
      principal_arn           = "arn:aws:iam::507152310572:role/github-deployment-applicationB"
      access_scope_namespaces = ["applicationB"]
    }
    # example with policy AmazonEKSClusterAdminPolicy access
    applicationC = {
      principal_arn           = "arn:aws:iam::507152310572:role/github-deployment-applicationC"
      access_scope_type       = "cluster"
      policy_arn              = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
    }
  }
}

If policy_arn is not specified, the default AmazonEKSAdminPolicy is used.

AWS EKS Cluster Policies

The most commonly used AWS access policies for EKS clusters:

  • AmazonEKSClusterAdminPolicy: This policy grants administrator access to a cluster and is equivalent to the RBAC cluster-admin role with star permissions on everything.
  • AmazonEKSAdminPolicy: This policy is equivalent to the RBAC admin role. It provides broad permissions to resources, typically scoped to a specific namespace. It is somewhat restricted when it comes to modifying namespace configurations or affecting other namespaces. This policy is designed to support namespace-based multi-tenancy. If you want an IAM principal to have a more limited administrative scope, consider using AmazonEKSAdminPolicy instead of AmazonEKSClusterAdminPolicy.
  • AmazonEKSEditPolicy: This policy grants access to edit most Kubernetes resources, usually within a specific namespace. It allows reading secrets and editing resources, but it should not serve as a security boundary, as there are several possible privilege escalation paths to AmazonEKSClusterAdminPolicy.
  • AmazonEKSViewPolicy: Grants access to list and view most Kubernetes resources, typically within a namespace. This policy is read-only and does not allow modification of resources. It is useful for monitoring and auditing purposes.

In summary, AmazonEKSClusterAdminPolicy provides the highest level of access, while AmazonEKSAdminPolicy and AmazonEKSEditPolicy offer more restricted, namespace-scoped permissions.

If you need more specific access to the cluster, you can list the available AWS access policies via the AWS CLI:

aws eks list-access-policies --output table --region us-east-1

Datadog

The module creates a Datadog integration secret for the apiKeyExistingSecret value of the Datadog Helm chart.
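On the Helm side that secret is consumed roughly like this (a sketch; the exact secret name created by the module is an assumption here):

```yaml
# values.yaml for the Datadog Helm chart
datadog:
  # Reference the secret created by this module in kube-system
  # instead of passing the API key in plain text (secret name assumed)
  apiKeyExistingSecret: datadog
```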

Nitro Enclaves

Example with basic enclave support:

module "eks" {
  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.11.0"
  
  # ... other configuration ...
  
  # Enable basic enclave support (legacy)
  enclaves               = true
  enclaves_instance_type = "m7a.4xlarge"
  enclaves_autoscaling_group = {
    size     = 2
    min_size = 1
    max_size = 4
  }
  enclaves_cpu_allocation    = "8"
  enclaves_memory_allocation = "8192"
}

Enclave Tracks (Multi-Version Support)

Example with enclave tracks for running multiple versions simultaneously:

module "eks" {
  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.11.0"
  
  # ... other configuration ...
  
  # Multiple enclave tracks for different versions
  enclave_tracks = {
    v1 = {
      autoscaling_group = {
        size     = 3  # Spreads across AZs
        min_size = 3
        max_size = 9
      }
      instance_type     = "m7a.4xlarge"
      cpu_allocation    = "8"
      memory_allocation = "8192"
    }
    
    v2 = {
      autoscaling_group = {
        size     = 1
        min_size = 0
        max_size = 3
      }
      instance_type = "m7a.2xlarge"
      # Uses default cpu/memory allocation if not specified
    }
  }
}

Each track creates:

  • Dedicated ASG with nodes spread across availability zones
  • Node labels: enclave.tools/track=<track_name>
  • Node taints: enclave.tools/track=<track_name>:NoSchedule

Deploy workloads to specific tracks using nodeSelector:

nodeSelector:
  aws-nitro-enclaves-k8s-dp: enabled
  enclave.tools/track: "stable"

tolerations:
  - key: "enclave.tools/track"
    operator: "Exists"
    effect: "NoSchedule"

For detailed enclave tracks documentation, see ENCLAVE_TRACKS.md.

Monitoring

Monitoring the cluster with Datadog is also included and enabled by default, using terraform-datadog-kubernetes.

Amazon EFS CSI driver

The module ships an IAM role for the Amazon EFS CSI driver, which can be enabled via the efs_csi_driver_enabled variable. Along with the role, it creates an Elastic File System (EFS) instance and mounts it to the cluster as a StorageClass named efs.
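Enabling it is a one-flag change (a sketch, reusing the inputs from the minimal example above):

```hcl
module "eks" {
  source       = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
  cluster_name = local.cluster_name
  region       = var.region

  vpc_config         = module.vpc.config
  extra_role_mapping = module.sso_roles.default_mappings
  datadog_api_key    = var.datadog_api_key
  traefik_cert_arn   = var.traefik_cert_arn
  alb_logs_bucket_id = module.region.alb_logs_bucket_id

  # Creates the EFS CSI driver IAM role, an EFS file system,
  # and the "efs" StorageClass
  efs_csi_driver_enabled = true
}
```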

Terraform tips and tricks

  1. From the beginning, the module has defined the kubernetes provider inside itself, configured from the aws_eks_cluster resource to authenticate to the EKS cluster. With this constraint only the create operation works properly; the update and remove operations don't.

  2. With version v4.2.0 the kubernetes provider configuration changed. It is configured from the aws_eks_cluster data source, and if the provider can't be configured that way, the aws_eks_cluster resource is used. PR with: fix kubernetes provider. With this change the create and update operations work perfectly; the remove operation still doesn't.

  3. In future versions of the terraform-aws-eks module, the remove operation can be fixed to work properly. For this, the kubernetes provider must be moved from the module to the workspace. It can be tested with PRs:

Cluster create

Works like a charm in every case, from the beginning.

Cluster update

Note

Starting with version 7.6.0, specifying the AWS region via the region input variable is required. Users must provide a valid region string (lowercase letters and digits, optionally separated by single hyphens), for example us-east-1.
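The stated format can be expressed as a Terraform variable validation (a sketch of the rule, not necessarily the module's exact implementation):

```hcl
variable "region" {
  description = "AWS Region"
  type        = string

  validation {
    # lowercase letters and digits, optionally separated by single hyphens
    condition     = can(regex("^[a-z0-9]+(-[a-z0-9]+)*$", var.region))
    error_message = "Region must be lowercase letters and digits, optionally separated by single hyphens (e.g. us-east-1)."
  }
}
```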

General steps to update an EKS cluster:

locals {
  # https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html
  # aws eks describe-addon-versions --addon-name eks-pod-identity-agent | jq '.addons[0].addonVersions[0]'
  eks_pod_identity_agent_version = {
    "1.29" = "v1.3.5-eksbuild.2"
    "1.30" = "v1.3.5-eksbuild.2"
    "1.31" = "v1.3.5-eksbuild.2"
    "1.32" = "v1.3.5-eksbuild.2"
  }
}
  • upgrade addons for EKS clusters by bumping the terraform-aws-eks module version to the latest release
diff --git a/internal-tools/dev/us-east-1/eks.tf b/internal-tools/dev/us-east-1/eks.tf
index a95261645..9cf8d04b9 100644
--- a/internal-tools/dev/us-east-1/eks.tf
+++ b/internal-tools/dev/us-east-1/eks.tf
@@ -14,7 +14,7 @@ module "acm" {
 }

 module "eks" {
-  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v4.4.2"
+  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v4.5.0"

   cluster_name       = format("tools-%s-%s", var.environment, data.aws_region.current.region)
   cluster_version    = "1.32"
  • upgrade the EKS control plane by bumping the cluster_version variable for each cluster
--- a/internal-tools/dev/us-east-1/eks.tf
+++ b/internal-tools/dev/us-east-1/eks.tf
@@ -17,7 +17,7 @@ module "eks" {
   source = "git@github.com:worldcoin/terraform-aws-eks?ref=v4.5.0"

   cluster_name       = format("tools-%s-%s", var.environment, data.aws_region.current.region)
-  cluster_version    = "1.31"
+  cluster_version    = "1.32"
   environment        = var.environment
   vpc_config         = module.vpc.config
   extra_role_mapping = module.sso_roles.default_mappings

Note

The control plane can only be upgraded one minor version at a time!

Note

Repeat this step until the cluster reaches the desired version.

  • observe node group rotation; after upgrading the control plane/launch template/AMIs it is not always done automatically. From time to time manual intervention is required to evict pods blocked by PDBs/annotations.

Note

The kubelet is compatible within three minor versions of the control plane, so node group rotation is not always required at every step and can be done once at the end.

Note

Please be careful with EKS crypto node rotation, and don't do it without announcing it on the Slack channel #planned-outages. An alternative approach is to leave the nodes in status Ready,SchedulingDisabled and wait for them to rotate along with app deployments.

  • schedule start-instance-refresh for the node group that hosts infrastructure pods
aws autoscaling start-instance-refresh --auto-scaling-group-name eks-node-tools-dev-us-east-1 --region us-east-1 --profile wld-internal-tools-dev --output json
  • observe describe-instance-refreshes for the node group that hosts infrastructure pods
aws autoscaling describe-instance-refreshes --auto-scaling-group-name eks-node-tools-dev-us-east-1 --region us-east-1 --profile wld-internal-tools-dev --output json
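To poll just the refresh progress rather than the full JSON, the output can be filtered (assumes jq is installed):

```shell
# Print e.g. "InProgress 40% complete" for the most recent refresh
aws autoscaling describe-instance-refreshes \
  --auto-scaling-group-name eks-node-tools-dev-us-east-1 \
  --region us-east-1 --profile wld-internal-tools-dev --output json \
  | jq -r '.InstanceRefreshes[0] | "\(.Status) \(.PercentageComplete // 0)% complete"'
```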

Before version 4.2.0

A manual upgrade is required with the command below, followed by terraform apply.

aws eks update-cluster-version --region ... --name ... --kubernetes-version 1.29

After version 4.2.0

Works like a charm without any manual operation. Just plan/apply the workspace with TFE.

Cluster remove

To remove the cluster you have to:

  1. Delete ALL traefik SVCs and ingresses, for example (keep in mind there may be more or fewer traefik instances than in this example):

    kubectl -n traefik delete svc traefik-alb --wait=false
    kubectl -n traefik patch svc traefik-alb -p '{"metadata":{"finalizers":null}}' --type=merge
    
    kubectl -n traefik-internal delete svc traefik-internal --wait=false
    kubectl -n traefik-internal patch svc traefik-internal -p '{"metadata":{"finalizers":null}}' --type=merge
    
    kubectl -n traefik delete ingress traefik-alb --wait=false
    kubectl -n traefik patch ingress traefik-alb -p '{"metadata":{"finalizers":null}}' --type=merge
  2. Set these flags; the module will then remove every usage of the Kubernetes provider and allow you to remove the cluster module without any errors.

    efs_csi_driver_enabled = false
    kubernetes_provider_enabled = false
  3. If applying the PR above fails (possible reason: a race condition - aws_auth removed too soon), remove all kubernetes_* resources from state:

    terraform state list | grep kubernetes_
    
    terraform state rm ...
  4. Manually remove LB deletion protection in AWS (both external and internal) before the final delete

  5. Remove the module invocation to finally delete the cluster itself.

  6. If the apply fails while deleting Auto Scaling groups, terminate leftover instances and rerun the apply (possible reason: a race condition - Karpenter didn't have enough time to clean up instances)
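The state cleanup in step 3 can be scripted instead of running terraform state rm by hand (a sketch; review the list before removing anything):

```shell
# Remove every kubernetes_* resource from state; they are
# cleaned up together with the cluster, so no destroy is needed
terraform state list | grep kubernetes_ | while read -r addr; do
  terraform state rm "$addr"
done
```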

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.9.0 |
| aws | >= 5.5 |
| cloudflare | >= 4.10 |
| datadog | >= 3.0 |
| kubernetes | >= 2.0 |
| random | >= 3.3 |
| tls | >= 4.0 |

Providers

| Name | Version |
|------|---------|
| aws | >= 5.5 |
| cloudflare | >= 4.10 |
| datadog | >= 3.0 |
| kubernetes | >= 2.0 |
| random | >= 3.3 |
| tls | >= 4.0 |

Modules

| Name | Source | Version |
|------|--------|---------|
| alb | git@github.com:worldcoin/terraform-aws-alb.git | v0.19.0 |
| datadog_monitoring | git@github.com:worldcoin/terraform-datadog-kubernetes | v1.2.2 |
| nlb | git@github.com:worldcoin/terraform-aws-nlb.git | v1.1.1 |

Resources

| Name | Type |
|------|------|
| aws_autoscaling_group.enclave | resource |
| aws_autoscaling_group.this | resource |
| aws_cloudwatch_event_rule.spot_aws_ec2 | resource |
| aws_cloudwatch_event_rule.spot_aws_health | resource |
| aws_cloudwatch_event_target.spot_aws_ec2 | resource |
| aws_cloudwatch_event_target.spot_aws_health | resource |
| aws_cloudwatch_log_group.this | resource |
| aws_efs_file_system.persistent_volume | resource |
| aws_efs_mount_target.persistent_volume | resource |
| aws_eks_access_entry.this | resource |
| aws_eks_access_policy_association.this | resource |
| aws_eks_addon.coredns | resource |
| aws_eks_addon.ebs_csi | resource |
| aws_eks_addon.eks_node_monitoring_agent | resource |
| aws_eks_addon.eks_pod_identity_agent | resource |
| aws_eks_addon.kube_proxy | resource |
| aws_eks_addon.mountpoint_s3_csi | resource |
| aws_eks_addon.snapshot_controller | resource |
| aws_eks_addon.vpc_cni | resource |
| aws_eks_cluster.this | resource |
| aws_eks_node_group.al2023 | resource |
| aws_eks_pod_identity_association.ebs_csi_controller | resource |
| aws_eks_pod_identity_association.this | resource |
| aws_iam_instance_profile.node | resource |
| aws_iam_openid_connect_provider.oidc_provider | resource |
| aws_iam_role.aws_efs_csi_driver | resource |
| aws_iam_role.aws_load_balancer | resource |
| aws_iam_role.aws_s3_mountpoint_csi | resource |
| aws_iam_role.cluster | resource |
| aws_iam_role.ebs_csi_controller | resource |
| aws_iam_role.karpenter | resource |
| aws_iam_role.kube_ops | resource |
| aws_iam_role.node | resource |
| aws_iam_role_policy.aws_efs_csi_driver | resource |
| aws_iam_role_policy.aws_load_balancer | resource |
| aws_iam_role_policy.aws_s3_mountpoint_csi | resource |
| aws_iam_role_policy.dockerhub_pull_through_cache | resource |
| aws_iam_role_policy.karpenter | resource |
| aws_iam_role_policy.kube_ops | resource |
| aws_iam_role_policy.node_inline_policy | resource |
| aws_iam_role_policy_attachment.cluster | resource |
| aws_iam_role_policy_attachment.ebs_csi_controller | resource |
| aws_iam_role_policy_attachment.node | resource |
| aws_kms_key.this | resource |
| aws_launch_template.al2023 | resource |
| aws_launch_template.enclave | resource |
| aws_launch_template.this | resource |
| aws_secretsmanager_secret.this | resource |
| aws_secretsmanager_secret_version.this | resource |
| aws_security_group.cluster | resource |
| aws_security_group.node | resource |
| aws_security_group.persistent_volume | resource |
| aws_security_group_rule.additional_cluster_security_group_rules | resource |
| aws_security_group_rule.additional_rule | resource |
| aws_security_group_rule.cluster_egress | resource |
| aws_security_group_rule.cluster_from_node_ingress | resource |
| aws_security_group_rule.node_allow_vpc_dns_tcp | resource |
| aws_security_group_rule.node_allow_vpc_dns_udp | resource |
| aws_security_group_rule.node_egress | resource |
| aws_security_group_rule.node_from_alb_ingress | resource |
| aws_security_group_rule.node_from_cluster_ingress | resource |
| aws_security_group_rule.node_to_node_ingress | resource |
| aws_security_group_rule.persistent_volume_from_node_ingress | resource |
| aws_security_group_rule.tfe_and_gha_cluster_ingress | resource |
| aws_security_group_rule.traefik_from_alb_metrics | resource |
| aws_security_group_rule.traefik_from_alb_traffic | resource |
| aws_sqs_queue.this | resource |
| aws_sqs_queue_policy.spot_notifications_sqs | resource |
| cloudflare_record.monitoring | resource |
| datadog_monitor.oom | resource |
| datadog_synthetics_test.cluster_monitoring | resource |
| kubernetes_config_map.aws_auth | resource |
| kubernetes_ingress_v1.treafik_ingress | resource |
| kubernetes_namespace_v1.traefik | resource |
| kubernetes_secret_v1.datadog | resource |
| kubernetes_service_v1.traefik_alb | resource |
| kubernetes_service_v1.traefik_nlb | resource |
| kubernetes_storage_class_v1.efs | resource |
| kubernetes_storage_class_v1.gp3 | resource |
| random_password.dd_clusteragent_token | resource |
| aws_caller_identity.account | data source |
| aws_eks_cluster.this | data source |
| aws_eks_cluster_auth.default | data source |
| aws_eks_cluster_auth.this | data source |
| aws_eks_clusters.this | data source |
| aws_iam_policy_document.assume_role | data source |
| aws_iam_policy_document.aws_efs_csi_driver | data source |
| aws_iam_policy_document.aws_efs_csi_driver_assume_role_policy | data source |
| aws_iam_policy_document.aws_load_balancer | data source |
| aws_iam_policy_document.aws_load_balancer_assume_role_policy | data source |
| aws_iam_policy_document.aws_s3_mountpoint_csi_s3_access | data source |
| aws_iam_policy_document.aws_s3_mountpoint_csi_s3_assume | data source |
| aws_iam_policy_document.cluster_assume_role_policy | data source |
| aws_iam_policy_document.dockerhub_pull_through_cache | data source |
| aws_iam_policy_document.eks_pod_identity_assume_role | data source |
| aws_iam_policy_document.karpenter | data source |
| aws_iam_policy_document.karpenter_assume_role_policy | data source |
| aws_iam_policy_document.kms | data source |
| aws_iam_policy_document.kube_ops | data source |
| aws_iam_policy_document.node_assume_role_policy | data source |
| aws_iam_policy_document.spot_notification_sqs_policy | data source |
| aws_region.current | data source |
| aws_ssm_parameter.al2023_ami | data source |
| aws_vpc.cluster_vpc | data source |
| cloudflare_zone.worldcoin_dev | data source |
| datadog_synthetics_locations.locations | data source |
| tls_certificate.this | data source |

Inputs

Name Description Type Default Required
access_entries Map of access entries to add to the cluster
map(object({
principal_arn = string
kubernetes_groups = optional(list(string), null)
type = optional(string, "STANDARD")
tags = optional(map(string), {})
access_scope_type = optional(string, "namespace")
access_scope_namespaces = optional(list(string), [])
policy_arn = optional(string, "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy")
}))
{} no
acm_extra_arns ARNs of ACM certificates used for TLS, attached as additional certificates to the ALB list(string) [] no
additional_cluster_security_group_rules Additional cluster security group rules
list(object({
type = string
from_port = number
to_port = number
protocol = string

description = optional(string)
cidr_blocks = optional(list(string))
ipv6_cidr_blocks = optional(list(string))
prefix_list_ids = optional(list(string))
self = optional(bool)
source_node_security_group = optional(bool, false)
sg_id = optional(string)
}))
[] no
additional_open_ports Additional ports accessible from the Internet for the ALB
set(object({
port = number
protocol = optional(string, "tcp")
}))
[] no
additional_security_group_rules Additional security group rules
list(object({
type = string
from_port = number
to_port = number
protocol = string

description = optional(string)
cidr_blocks = optional(list(string))
ipv6_cidr_blocks = optional(list(string))
prefix_list_ids = optional(list(string))
self = optional(bool)
source_cluster_security_group = optional(bool, false)
sg_id = optional(string)
}))
[] no
alb_additional_node_ports List of node ports which are accessible by ALB list(number) [] no
alb_idle_timeout The time in seconds that the connection is allowed to be idle number 60 no
alb_logs_bucket_id The ID of the S3 bucket to store logs in for ALB. string n/a yes
argocd_role_arn The ARN of the remote ArgoCD role used to assume eks-cluster role string null no
authentication_mode The authentication mode for the cluster. Valid values are CONFIG_MAP, API or API_AND_CONFIG_MAP string "API_AND_CONFIG_MAP" no
aws_autoscaling_group_enabled Whether to enable AWS Autoscaling group bool true no
aws_load_balancer_iam_role_enabled Whether to enable the IAM role for the AWS Load Balancer bool true no
cluster_endpoint_public_access Indicates whether or not the Amazon EKS public API server endpoint is enabled bool false no
cluster_name The name of the cluster. Has to be unique per region per account. string n/a yes
cluster_version The Kubernetes version to use for the cluster. string "1.32" no
coredns_max_replicas Maximum number of replicas for CoreDNS number 10 no
coredns_min_replicas Minimum number of replicas for CoreDNS number 2 no
datadog_api_key Datadog API key. Stored in kube-system namespace as a secret. string n/a yes
deploy_desired_vs_status_critical Threshold for critical for Desired pods vs current pods (Deployments) number 10 no
deploy_desired_vs_status_evaluation_period Evaluation period for Desired pods vs current pods (Deployments) string "last_15m" no
deploy_desired_vs_status_warning Threshold for warning for Desired pods vs current pods (Deployments) number 1 no
dockerhub_pull_through_cache_repositories_arn The ARN of the repositories to allow the EKS node group to pull images from the DockerHub pull-through cache. string "arn:aws:ecr:us-east-1:507152310572:repository/docker-cache/*" no
drop_invalid_header_fields Drop invalid header fields bool false no
efs_csi_driver_enabled Whether to enable the EFS CSI driver (IAM Role & StorageClass). bool false no
eks_node_group Configuration for EKS node group
object({
arch = string
types = list(string)
disk = optional(number, 100)
dns = optional(string, "172.20.0.10")
})
null no
enclaves Enabling Nitro Enclaves for the cluster bool false no
enclaves_autoscaling_group Configuration for Nitro Enclaves autoscaling group
object({
size = optional(number, 1)
min_size = optional(number, 0)
max_size = optional(number, 10)
})
{} no
enclaves_cpu_allocation Number of CPUs to allocate for Nitro Enclaves per node string "4" no
enclaves_instance_type Instance type for Nitro Enclaves string "m7a.2xlarge" no
enclaves_memory_allocation Memory in MiB to allocate for Nitro Enclaves per node string "4096" no
enclave_tracks Additional enclave tracks for multi-version deployments. Key is used as track identifier.
map(object({
autoscaling_group = optional(object({
size = optional(number, 1)
min_size = optional(number, 0)
max_size = optional(number, 10)
}), {})
instance_type = optional(string)
cpu_allocation = optional(string)
memory_allocation = optional(string)
}))
{} no
environment Environment of cluster string n/a yes
external_alb_enabled External Application load balancer to create. If true, the ALB will be created. bool true no
external_check_locations List of DD locations to check cluster availability from list(string)
[
"aws:af-south-1",
"aws:ap-south-1",
"aws:ap-southeast-1",
"aws:eu-central-1",
"aws:sa-east-1",
"aws:us-east-1"
]
no
external_tls_listener_version The version of the TLS listener to use for external ALB. string "1.3" no
extra_nlb_listeners List with configuration for additional listeners
list(object({
name = string
port = string
protocol = optional(string, "TCP")
target_group_port = number
}))
[] no
extra_role_mapping Extra role mappings to add to the aws-auth configmap.
list(object({
rolearn = string
username = string
groups = list(string)
}))
[] no
gha_cidr GitHub Actions CIDR block string "10.0.96.0/20" no
http_put_response_hop_limit The maximum number of hops allowed for HTTP PUT requests. Must be between 1 and 64. number 2 no
internal_nlb_acm_arn The ARN of the certificate to use for internal NLB. string "" no
internal_nlb_enabled Internal Network load balancers to create. If true, the NLB will be created. bool true no
internal_tls_listener_version The version of the TLS listener to use for internal NLB. string "1.3" no
kube_ops_enabled Whether to create a role and association for kube-ops bool true no
kubelet_extra_args kubelet extra args to pass to the node group string "--register-with-taints=critical:NoExecute" no
kubernetes_provider_enabled Whether to create a Kubernetes provider for the cluster. Use as a prerequisite to cluster removal. bool true no
memory_limits_low_perc_enabled Enable memory limits low percentage alert bool false no
monitoring_enabled Whether to enable monitoring (Datadog). bool true no
monitoring_notification_channel The Datadog notification channel to use for monitoring alerts. string "@slack-TFH-infrastructure-alerts" no
monitoring_reachability_fail_locations Number of locations to fail to trigger the reachability test number 5 no
monitoring_reachability_failure_duration Time after first error when the reachability test is triggered number 300 no
node_instance_profile_inline_policies Inline policies to attach to the node instance profile. Key is the name of the policy, value is the policy document. map(string) {} no
node_monitoring_agent_enabled Enable node monitoring agent bool false no
on_demand_base_capacity The number of minimum on-demand instances to launch. number 1 no
open_to_all Set to true if you want to open the cluster to all traffic from internet bool false no
region AWS Region string n/a yes
s3_mountpoint_csi_driver_enabled Whether to enable the S3 mountpoint CSI driver bool false no
s3_mountpoint_csi_s3_bucket_arns List of S3 bucket ARNs to allow access from the S3 mountpoint CSI driver list(string)
[
"*"
]
no
static_autoscaling_group Configuration for static autoscaling group
object({
size = number
arch = optional(string, null)
type = string
})
null no
storage_class Configuration for the storage class that defines how volumes are allocated in Kubernetes.
object({
volume_binding_mode = optional(string, "WaitForFirstConsumer")
allow_volume_expansion = optional(bool, true)
})
{
"allow_volume_expansion": true,
"volume_binding_mode": "WaitForFirstConsumer"
}
no
tfe_cidr Terraform Enterprise CIDR block string "10.52.160.0/20" no
traefik_cert_arn The ARN of the certificate to use for Traefik. string null no
traefik_nlb_service_ports List of additional ports for the traefik k8s service
list(object({
name = string
port = number
target_port = string
protocol = string
}))
[] no
use_private_subnets_for_internal_nlb Set to true if you want to use private subnets for internal NLB bool false no
vpc_cni_enable_pod_eni Enable pod ENI support bool true no
vpc_cni_enable_prefix_delegation Enable prefix delegation for IPv6, allocate IPs in /28 blocks (instead of all at once) bool false no
vpc_cni_external_snat Needed to enable cross-vpc pod-to-pod communication - see: https://github.com/aws/amazon-vpc-cni-k8s?tab=readme-ov-file#aws_vpc_k8s_cni_externalsnat string false no
vpc_cni_pod_security_group_enforcing_mode Set pod security group enforcing mode string "standard" no
vpc_cni_version_override The version of the VPC CNI plugin to use. If not specified, the default version for the cluster version will be used. string "" no
vpc_cni_warm_eni_target Number of ENIs to keep warm for each node to speed up pod scheduling string "1" no
vpc_cni_warm_ip_target Number of IPs to keep warm for each node to speed up pod scheduling string "8" no
vpc_config VPC configuration from the aws/vpc module
object({
vpc_id = string
private_subnets = list(string)
public_subnets = list(string)
})
n/a yes
wafv2_arn The ARN of the WAFv2 WebACL to associate with the ALB string "" no

Outputs

| Name | Description |
|------|-------------|
| alb_arn | An ARN of the main ALB (traefik) |
| alb_arns | Map of ARNs of the ALBs |
| alb_dns_name | A DNS name of the main ALB (traefik) |
| alb_dns_names | Map of DNS names of the ALBs |
| cluster_certificate_authority_data | Base64 encoded certificate data required to communicate with the cluster |
| cluster_endpoint | Endpoint for your Kubernetes API server |
| cluster_oidc_issuer_url | The OIDC issuer URL for the EKS cluster |
| name | The name of the cluster |
| nlb_arns | Map of ARNs of the NLBs |
| nlb_dns_names | Map of DNS names of the NLBs |
| nlb_zone_ids | Map of zone IDs of the NLBs |
| node_security_group_id | The security group ID of the EKS nodes |
