- Description
- How to release
- Breaking changes
- Supported versions
- Examples
- Datadog
- Amazon EFS CSI driver
- Terraform tips and tricks
- Requirements
- Providers
- Modules
- Resources
- Inputs
- Outputs
This module creates an EKS cluster on AWS and is designed to be used together with the vpc and sso-roles modules.
New releases are drafted automatically by the Release Drafter GitHub Action.
The type of version bump is derived from the commit tags (feat/bugfix/etc.).
The release is created as a draft, so you have to edit it manually and publish it as final.
Version 10.0.0 introduces a breaking change due to upgrading deprecated kubernetes_* resources to their kubernetes_*_v1 counterparts. This affects the following resources:
- `kubernetes_namespace` → `kubernetes_namespace_v1`
- `kubernetes_service` → `kubernetes_service_v1`
- `kubernetes_secret` → `kubernetes_secret_v1`
- `kubernetes_storage_class` → `kubernetes_storage_class_v1`
Why `moved` blocks don't work:
Terraform's `moved` block cannot be used for this migration, because the Kubernetes provider does not support moving resource state across different resource types:

```
Error: Move Resource State Not Supported

The "kubernetes_storage_class_v1" resource type does not support moving resource state across resource types.
```
Suggested solution:
Add the following `removed` and `import` blocks to your root module (e.g., `state.tf`) to migrate resources without destroying them:

```hcl
# Remove the old resource from state without destroying it
removed {
  from = module.eks_security.kubernetes_storage_class.gp3

  lifecycle {
    destroy = false
  }
}

# Import the existing storage class into the new resource type
import {
  to = module.eks_security.kubernetes_storage_class_v1.gp3[0]
  id = "gp3"
}
```

After a successful migration (`terraform apply`), you can remove the `removed` and `import` blocks from your configuration.
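The same pattern applies to the other renamed resources. A hypothetical sketch for a namespace resource (the `module.eks_security.kubernetes_namespace.example` address and the `example` namespace name are placeholders; adjust them to the addresses in your own state):

```hcl
# Hypothetical migration for a namespace resource; adjust addresses to your state.
removed {
  from = module.eks_security.kubernetes_namespace.example

  lifecycle {
    destroy = false # keep the namespace in the cluster
  }
}

import {
  # kubernetes_namespace_v1 is imported by namespace name
  to = module.eks_security.kubernetes_namespace_v1.example
  id = "example"
}
```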
Version 4.0 introduces an authentication mode change from `CONFIG_MAP` to `API_AND_CONFIG_MAP`. This change requires manual intervention on existing clusters. Run the following for each cluster:

```shell
aws eks update-cluster-config --name CLUSTER_NAME --access-config authenticationMode=API_AND_CONFIG_MAP --region AWS_REGION
```

This changes the authentication mode to `API_AND_CONFIG_MAP`, and the next terraform plan/apply will work as expected.
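If you prefer to keep the mode explicit in code, the module exposes an `authentication_mode` input (default `API_AND_CONFIG_MAP`, see Inputs below). A minimal sketch, with the other required inputs omitted:

```hcl
module "eks" {
  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"

  # Pin the authentication mode explicitly instead of relying on the default.
  authentication_mode = "API_AND_CONFIG_MAP"

  # ... other required inputs (cluster_name, region, vpc_config, ...) ...
}
```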
The module currently supports the following Kubernetes versions:
- 1.32
- 1.33

Note

The default version for an EKS cluster is 1.32.
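To run a non-default version, set `cluster_version` explicitly. A minimal sketch, with the other required inputs omitted:

```hcl
module "eks" {
  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"

  # Pin Kubernetes to 1.33 instead of the 1.32 default.
  cluster_version = "1.33"

  # ... other required inputs ...
}
```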
A minimal example of how to use this module:

```hcl
module "eks" {
  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"

  cluster_name       = local.cluster_name
  region             = var.region
  vpc_config         = module.vpc.config
  extra_role_mapping = module.sso_roles.default_mappings
  datadog_api_key    = var.datadog_api_key
  traefik_cert_arn   = var.traefik_cert_arn
  alb_logs_bucket_id = module.region.alb_logs_bucket_id
}
```

Example of an internal load balancer setup:
module "eks" {
source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
cluster_name = local.cluster_name
region = var.region
vpc_config = module.vpc.config
extra_role_mapping = module.sso_roles.default_mappings
datadog_api_key = var.datadog_api_key
traefik_cert_arn = var.traefik_cert_arn
alb_logs_bucket_id = module.region.alb_logs_bucket_id
internal_nlb_enabled = true
internal_nlb_acm_arn = module.acm.cert_arn
}Example off using Static Auto Scaling Group
module "eks" {
source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
cluster_name = local.cluster_name
region = var.region
vpc_config = module.vpc.config
extra_role_mapping = module.sso_roles.default_mappings
environment = var.environment
traefik_cert_arn = module.acm_v3.cert_arn
datadog_api_key = var.datadog_api_key
alb_logs_bucket_id = module.region.alb_logs_bucket_id
monitoring_enabled = false
internal_nlb_enabled = true
static_autoscaling_group = {
size = 8
arch = "arm64"
type = "m7g.16xlarge"
}
}Example of using private subnets for internal NLB
module "eks" {
source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
cluster_name = local.cluster_name
region = var.region
vpc_config = module.vpc.config
extra_role_mapping = module.sso_roles.default_mappings
environment = var.environment
traefik_cert_arn = module.acm_v3.cert_arn
datadog_api_key = var.datadog_api_key
alb_logs_bucket_id = module.region.alb_logs_bucket_id
monitoring_enabled = false
internal_nlb_enabled = true
use_private_subnets_for_internal_nlb = true
}Example of using additional_security_group_rules to add rules to the node security group and additional_cluster_security_group_rules for the cluster security group.
module "eks" {
source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
cluster_name = local.cluster_name
region = var.region
environment = var.environment
vpc_config = module.vpc.config
extra_role_mapping = module.sso_roles.default_mappings
traefik_cert_arn = module.acm.cert_arn
internal_nlb_enabled = true
datadog_api_key = var.datadog_api_key
alb_logs_bucket_id = module.region.alb_logs_bucket_id
monitoring_notification_channel = "@slack-TFH-infrastructure-alerts-stage"
# Add rules to the NODE security group
additional_security_group_rules = [
{
type = "ingress"
from_port = 0
to_port = 0
protocol = "-1"
description = "Allow all ingress traffic from specific CIDR blocks"
cidr_blocks = ["10.100.0.0/16", "192.168.0.0/24"]
},
{
type = "ingress"
from_port = 0
to_port = 0
protocol = "-1"
description = "Allow all ingress traffic from the cluster security group"
source_cluster_security_group = true
}
]
# Add rules to the CLUSTER security group
additional_cluster_security_group_rules = [
{
type = "ingress"
from_port = 0
to_port = 0
protocol = "-1"
description = "Allow all ingress traffic from the node security group"
source_node_security_group = true
}
]
}The access_entries input allows you to associate access policies with access entries. The access_entries input is a map where the key is the name of the access entry and the value is a map with the following keys:
module "eks" {
source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"
cluster_name = local.cluster_name
region = var.region
vpc_config = module.vpc.config
extra_role_mapping = module.sso_roles.default_mappings
datadog_api_key = var.datadog_api_key
traefik_cert_arn = var.traefik_cert_arn
alb_logs_bucket_id = module.region.alb_logs_bucket_id
access_entries = {
# example with cluster access with default AmazonEKSAdminPolicy
applicationA = {
principal_arn = "arn:aws:iam::507152310572:role/github-deployment-applicationA"
access_scope_type = "cluster"
}
# example with namespace access
applicationB = {
principal_arn = "arn:aws:iam::507152310572:role/github-deployment-applicationB"
access_scope_namespaces = ["applicationB"]
}
# example with policy AmazonEKSClusterAdminPolicy access
applicationC = {
principal_arn = "arn:aws:iam::507152310572:role/github-deployment-applicationC"
access_scope_type = "cluster"
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
}
}
}Once the policy_arn is not specified, the default AmazonEKSAdminPolicy is used.
The most commonly used AWS policies for EKS clusters:
- AmazonEKSClusterAdminPolicy: This policy grants administrator access to a cluster and is equivalent to the RBAC cluster-admin role with star permissions on everything.
- AmazonEKSAdminPolicy: This policy is equivalent to the RBAC admin role. It provides broad permissions to resources, typically scoped to a specific namespace. It is somewhat restricted when it comes to modifying namespace configurations or affecting other namespaces. This policy is designed to support namespace-based multi-tenancy. If you want an IAM principal to have a more limited administrative scope, consider using AmazonEKSAdminPolicy instead of AmazonEKSClusterAdminPolicy.
- AmazonEKSEditPolicy: This policy grants access to edit most Kubernetes resources, usually within a specific namespace. It allows reading secrets and editing resources, but it should not serve as a security boundary, as there are several possible privilege escalation paths to AmazonEKSClusterAdminPolicy.
- AmazonEKSViewPolicy: Grants access to list and view most Kubernetes resources, typically within a namespace. This policy is read-only and does not allow modification of resources. It is useful for monitoring and auditing purposes.
In summary, AmazonEKSClusterAdminPolicy provides the highest level of access, while AmazonEKSAdminPolicy and AmazonEKSEditPolicy offer more restricted, namespace-scoped permissions.
If you need specific access to the cluster, you can list the available AWS policies via the AWS CLI:

```shell
aws eks list-access-policies --output table --region us-east-1
```
The module creates a Datadog integration secret for the `apiKeyExistingSecret` value of the Datadog Helm chart.
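A minimal sketch of wiring this into a `helm_release`, assuming the secret is named `datadog` and lives in the `kube-system` namespace (the secret name is an assumption; check what your module version actually created with `kubectl -n kube-system get secrets`):

```hcl
resource "helm_release" "datadog" {
  name       = "datadog"
  namespace  = "kube-system"
  repository = "https://helm.datadoghq.com"
  chart      = "datadog"

  # Point the chart at the secret created by this module instead of
  # passing the API key inline.
  set {
    name  = "datadog.apiKeyExistingSecret"
    value = "datadog" # assumed secret name; verify in your cluster
  }
}
```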
Example with basic enclave support:
module "eks" {
source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.11.0"
# ... other configuration ...
# Enable basic enclave support (legacy)
enclaves = true
enclaves_instance_type = "m7a.4xlarge"
enclaves_autoscaling_group = {
size = 2
min_size = 1
max_size = 4
}
enclaves_cpu_allocation = "8"
enclaves_memory_allocation = "8192"
}Example with enclave tracks for running multiple versions simultaneously:
module "eks" {
source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.11.0"
# ... other configuration ...
# Multiple enclave tracks for different versions
enclave_tracks = {
v1 = {
autoscaling_group = {
size = 3 # Spreads across AZs
min_size = 3
max_size = 9
}
instance_type = "m7a.4xlarge"
cpu_allocation = "8"
memory_allocation = "8192"
}
v2 = {
autoscaling_group = {
size = 1
min_size = 0
max_size = 3
}
instance_type = "m7a.2xlarge"
# Uses default cpu/memory allocation if not specified
}
}
}Each track creates:
- Dedicated ASG with nodes spread across availability zones
- Node labels:
enclave.tools/track=<track_name> - Node taints:
enclave.tools/track=<track_name>:NoSchedule
Deploy workloads to specific tracks using `nodeSelector`:

```yaml
nodeSelector:
  aws-nitro-enclaves-k8s-dp: enabled
  enclave.tools/track: "stable"
tolerations:
  # Tolerate the track taint described above
  - key: "enclave.tools/track"
    operator: "Exists"
    effect: "NoSchedule"
```

For detailed enclave tracks documentation, see ENCLAVE_TRACKS.md.
Monitoring the cluster with Datadog is also included and enabled by default, via terraform-datadog-kubernetes.
The module ships with an IAM role for the Amazon EFS CSI driver, which can be enabled using the `efs_csi_driver_enabled` variable. Along with the role, it creates an Elastic File System (EFS) instance and mounts it to the cluster as a StorageClass named `efs`.
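A minimal sketch of enabling the driver, plus a PVC that consumes the resulting `efs` StorageClass (the PVC is a hypothetical consumer for illustration, not part of the module):

```hcl
module "eks" {
  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v7.6.0"

  # ... other required inputs ...

  # Creates the EFS CSI driver IAM role, an EFS file system,
  # and an "efs" StorageClass in the cluster.
  efs_csi_driver_enabled = true
}

# Hypothetical consumer: a PVC backed by the "efs" StorageClass.
resource "kubernetes_persistent_volume_claim_v1" "shared" {
  metadata {
    name      = "shared-data"
    namespace = "default"
  }

  spec {
    access_modes       = ["ReadWriteMany"] # EFS supports many-writer access
    storage_class_name = "efs"

    resources {
      requests = {
        storage = "5Gi"
      }
    }
  }
}
```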
- From the beginning, the module has defined a `kubernetes` provider inside it, configured from the `aws_eks_cluster` Terraform resource to authenticate to the EKS cluster. With this constraint, only the `create` operation works properly; the `update` and `remove` operations do not.
- With version `v4.2.0` the `kubernetes` provider changed: it is configured from the `aws_eks_cluster` data source, and if the provider cannot be configured that way, the `aws_eks_cluster` Terraform resource is used instead. PR: fix kubernetes provider. With this change, the `create` and `update` operations work perfectly; the `remove` operation still does not.
- In future versions of the terraform-aws-eks module, the `remove` operation can be fixed to work properly. For this, the `kubernetes` provider must be moved from the module to the workspace. It can be tested with these PRs:
  - remove kubernetes provider from terraform-aws-eks module
  - test if remove kubernetes provider from tf module works

  This works like a charm in every case, from the beginning.
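A minimal sketch of the workspace-level provider configuration this refers to, assuming the cluster name is known to the workspace (the data-source-based pattern; `local.cluster_name` is illustrative):

```hcl
# Look up the cluster by name at the workspace level, independently of the module.
data "aws_eks_cluster" "this" {
  name = local.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = local.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```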
Note

Starting with version 7.6.0, the AWS region must be specified via the region input variable.
Provide a valid region string (lowercase letters and digits, optionally separated by single hyphens), for example us-east-1.
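A sketch of a matching variable definition at the workspace level (the validation regex is an illustration of the stated rule, not copied from the module):

```hcl
variable "region" {
  description = "AWS Region"
  type        = string

  validation {
    # Lowercase letters and digits, optionally separated by single hyphens,
    # e.g. "us-east-1".
    condition     = can(regex("^[a-z0-9]+(-[a-z0-9]+)*$", var.region))
    error_message = "Region must be lowercase letters and digits, optionally separated by single hyphens."
  }
}
```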
General steps to update an EKS cluster:

- Check the AWS documentation about the EKS cluster update procedure.
- Update the addon version information in the terraform-aws-eks repository and release a new module version:

```hcl
locals {
  # https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html
  # aws eks describe-addon-versions --addon-name eks-pod-identity-agent | jq '.addons[0].addonVersions[0]'
  eks_pod_identity_agent_version = {
    "1.29" = "v1.3.5-eksbuild.2"
    "1.30" = "v1.3.5-eksbuild.2"
    "1.31" = "v1.3.5-eksbuild.2"
    "1.32" = "v1.3.5-eksbuild.2"
  }
}
```

- Upgrade the addons for the EKS clusters by bumping the terraform-aws-eks module version to the latest release:
```diff
diff --git a/internal-tools/dev/us-east-1/eks.tf b/internal-tools/dev/us-east-1/eks.tf
index a95261645..9cf8d04b9 100644
--- a/internal-tools/dev/us-east-1/eks.tf
+++ b/internal-tools/dev/us-east-1/eks.tf
@@ -14,7 +14,7 @@ module "acm" {
 }

 module "eks" {
-  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v4.4.2"
+  source = "git@github.com:worldcoin/terraform-aws-eks?ref=v4.5.0"

   cluster_name    = format("tools-%s-%s", var.environment, data.aws_region.current.region)
   cluster_version = "1.32"
```

- Upgrade the EKS control plane by bumping the `cluster_version` variable for each cluster:
```diff
--- a/internal-tools/dev/us-east-1/eks.tf
+++ b/internal-tools/dev/us-east-1/eks.tf
@@ -17,7 +17,7 @@ module "eks" {
   source = "git@github.com:worldcoin/terraform-aws-eks?ref=v4.5.0"

   cluster_name    = format("tools-%s-%s", var.environment, data.aws_region.current.region)
-  cluster_version = "1.31"
+  cluster_version = "1.32"

   environment        = var.environment
   vpc_config         = module.vpc.config
   extra_role_mapping = module.sso_roles.default_mappings
```
Note

The control plane can only be updated by one minor version (+1) at a time!

Note

Repeat this step as many times as needed to reach the target EKS cluster version.
- Observe the node group rotation. After upgrading the control plane, launch template, or AMIs, rotation is not always done automatically; from time to time manual intervention is required to evict pods protected by PDBs or annotations.

Note

Kubelet compatibility is +3 versions, so node group rotation is not always required and can be done once at the end.

Note

Please be careful with rotating EKS crypto nodes, and don't do this without announcing it on the Slack channel #planned-outages.
An alternative approach is to leave them in the Ready,SchedulingDisabled state and wait for them to rotate with app deployments.
- Schedule `start-instance-refresh` for the node group used to keep infrastructure pods:

```shell
aws autoscaling start-instance-refresh --auto-scaling-group-name eks-node-tools-dev-us-east-1 --region us-east-1 --profile wld-internal-tools-dev --output json
```

- Observe `describe-instance-refreshes` for the node group used to keep infrastructure pods:

```shell
aws autoscaling describe-instance-refreshes --auto-scaling-group-name eks-node-tools-dev-us-east-1 --region us-east-1 --profile wld-internal-tools-dev --output json
```

A manual upgrade is required with the command below, followed by a terraform apply after execution:

```shell
aws eks update-cluster-version --region ... --name ... --kubernetes-version 1.29
```

Works like a charm without any manual operation; just plan/apply the workspace with TFE.
To remove the cluster you have to:

- Delete ALL traefik SVCs and ingresses, for example (keep in mind there could be more or fewer traefiks than in this example):

```shell
kubectl -n traefik delete svc traefik-alb --wait=false
kubectl -n traefik patch svc traefik-alb -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl -n traefik-internal delete svc traefik-internal --wait=false
kubectl -n traefik-internal patch svc traefik-internal -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl -n traefik delete ingress traefik-alb --wait=false
kubectl -n traefik patch ingress traefik-alb -p '{"metadata":{"finalizers":null}}' --type=merge
```

- Set these flags; the module will remove every usage of the Kubernetes provider and allow you to remove the cluster module without any errors:

```hcl
efs_csi_driver_enabled      = false
kubernetes_provider_enabled = false
```

- If the apply of the above PR fails (possible reason: race condition, aws_auth removed too soon), remove all `kubernetes_*` resources from state:

```shell
terraform state list | grep kubernetes_
terraform state rm ...
```

- Manually remove LB deletion protection in AWS (both external and internal) before the final delete.
- Remove the module invocation to finally delete the cluster itself.
- If the apply of the above PR fails on deleting autoscaling groups, terminate the leftover instances and rerun the apply (possible reason: race condition, Karpenter didn't have enough time to clean up instances).
| Name | Version |
|---|---|
| terraform | >= 1.9.0 |
| aws | >= 5.5 |
| cloudflare | >= 4.10 |
| datadog | >= 3.0 |
| kubernetes | >= 2.0 |
| random | >= 3.3 |
| tls | >= 4.0 |
| Name | Version |
|---|---|
| aws | >= 5.5 |
| cloudflare | >= 4.10 |
| datadog | >= 3.0 |
| kubernetes | >= 2.0 |
| random | >= 3.3 |
| tls | >= 4.0 |
| Name | Source | Version |
|---|---|---|
| alb | git@github.com:worldcoin/terraform-aws-alb.git | v0.19.0 |
| datadog_monitoring | git@github.com:worldcoin/terraform-datadog-kubernetes | v1.2.2 |
| nlb | git@github.com:worldcoin/terraform-aws-nlb.git | v1.1.1 |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| access_entries | Map of access entries to add to the cluster | `map(object({…}))` | `{}` | no |
| acm_extra_arns | ARNs of ACM certificates used for TLS, attached as additional certificates to the ALB | `list(string)` | `[]` | no |
| additional_cluster_security_group_rules | Additional cluster security group rules | `list(object({…}))` | `[]` | no |
| additional_open_ports | Additional ports accessible from the Internet for the ALB | `set(object({…}))` | `[]` | no |
| additional_security_group_rules | Additional security group rules | `list(object({…}))` | `[]` | no |
| alb_additional_node_ports | List of node ports which are accessible by the ALB | `list(number)` | `[]` | no |
| alb_idle_timeout | The time in seconds that the connection is allowed to be idle | `number` | `60` | no |
| alb_logs_bucket_id | The ID of the S3 bucket to store ALB logs in. | `string` | n/a | yes |
| argocd_role_arn | The ARN of the remote ArgoCD role used to assume the eks-cluster role | `string` | `null` | no |
| authentication_mode | The authentication mode for the cluster. Valid values are `CONFIG_MAP`, `API` or `API_AND_CONFIG_MAP` | `string` | `"API_AND_CONFIG_MAP"` | no |
| aws_autoscaling_group_enabled | Whether to enable the AWS Auto Scaling group | `bool` | `true` | no |
| aws_load_balancer_iam_role_enabled | Whether to enable the IAM role for the AWS Load Balancer | `bool` | `true` | no |
| cluster_endpoint_public_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled | `bool` | `false` | no |
| cluster_name | The name of the cluster. Has to be unique per region per account. | `string` | n/a | yes |
| cluster_version | The Kubernetes version to use for the cluster. | `string` | `"1.32"` | no |
| coredns_max_replicas | Maximum number of replicas for CoreDNS | `number` | `10` | no |
| coredns_min_replicas | Minimum number of replicas for CoreDNS | `number` | `2` | no |
| datadog_api_key | Datadog API key. Stored in the kube-system namespace as a secret. | `string` | n/a | yes |
| deploy_desired_vs_status_critical | Critical threshold for desired pods vs current pods (Deployments) | `number` | `10` | no |
| deploy_desired_vs_status_evaluation_period | Evaluation period for desired pods vs current pods (Deployments) | `string` | `"last_15m"` | no |
| deploy_desired_vs_status_warning | Warning threshold for desired pods vs current pods (Deployments) | `number` | `1` | no |
| dockerhub_pull_through_cache_repositories_arn | The ARN of the repositories the EKS node group is allowed to pull images from via the DockerHub pull-through cache. | `string` | `"arn:aws:ecr:us-east-1:507152310572:repository/docker-cache/*"` | no |
| drop_invalid_header_fields | Drop invalid header fields | `bool` | `false` | no |
| efs_csi_driver_enabled | Whether to enable the EFS CSI driver (IAM role & StorageClass). | `bool` | `false` | no |
| eks_node_group | Configuration for the EKS node group | `object({…})` | `null` | no |
| enclaves | Enable Nitro Enclaves for the cluster | `bool` | `false` | no |
| enclaves_autoscaling_group | Configuration for the Nitro Enclaves autoscaling group | `object({…})` | `{}` | no |
| enclaves_cpu_allocation | Number of CPUs to allocate for Nitro Enclaves per node | `string` | `"4"` | no |
| enclaves_instance_type | Instance type for Nitro Enclaves | `string` | `"m7a.2xlarge"` | no |
| enclaves_memory_allocation | Memory in MiB to allocate for Nitro Enclaves per node | `string` | `"4096"` | no |
| enclave_tracks | Additional enclave tracks for multi-version deployments. The key is used as the track identifier. | `map(object({…}))` | `{}` | no |
| environment | Environment of the cluster | `string` | n/a | yes |
| external_alb_enabled | Whether to create the external ALB. If true, the ALB will be created. | `bool` | `true` | no |
| external_check_locations | List of DD locations to check cluster availability from | `list(string)` | `[…]` | no |
| external_tls_listener_version | The version of the TLS listener to use for the external ALB. | `string` | `"1.3"` | no |
| extra_nlb_listeners | List with configuration for additional listeners | `list(object({…}))` | `[]` | no |
| extra_role_mapping | Extra role mappings to add to the aws-auth configmap. | `list(object({…}))` | `[]` | no |
| gha_cidr | GitHub Actions CIDR block | `string` | `"10.0.96.0/20"` | no |
| http_put_response_hop_limit | The maximum number of hops allowed for HTTP PUT requests. Must be between 1 and 64. | `number` | `2` | no |
| internal_nlb_acm_arn | The ARN of the certificate to use for the internal NLB. | `string` | `""` | no |
| internal_nlb_enabled | Internal Network Load Balancer to create. If true, the NLB will be created. | `bool` | `true` | no |
| internal_tls_listener_version | The version of the TLS listener to use for the internal NLB. | `string` | `"1.3"` | no |
| kube_ops_enabled | Whether to create a role and association for kube-ops | `bool` | `true` | no |
| kubelet_extra_args | Extra kubelet args to pass to the node group | `string` | `"--register-with-taints=critical:NoExecute"` | no |
| kubernetes_provider_enabled | Whether to create a Kubernetes provider for the cluster. Use as a prerequisite to cluster removal. | `bool` | `true` | no |
| memory_limits_low_perc_enabled | Enable the memory limits low percentage alert | `bool` | `false` | no |
| monitoring_enabled | Whether to enable monitoring (Datadog). | `bool` | `true` | no |
| monitoring_notification_channel | The Datadog notification channel to use for monitoring alerts. | `string` | `"@slack-TFH-infrastructure-alerts"` | no |
| monitoring_reachability_fail_locations | Number of locations that must fail to trigger the reachability test | `number` | `5` | no |
| monitoring_reachability_failure_duration | Time after the first error when the reachability test is triggered | `number` | `300` | no |
| node_instance_profile_inline_policies | Inline policies to attach to the node instance profile. Key is the name of the policy, value is the policy document. | `map(string)` | `{}` | no |
| node_monitoring_agent_enabled | Enable the node monitoring agent | `bool` | `false` | no |
| on_demand_base_capacity | The minimum number of on-demand instances to launch. | `number` | `1` | no |
| open_to_all | Set to `true` if you want to open the cluster to all traffic from the Internet | `bool` | `false` | no |
| region | AWS Region | `string` | n/a | yes |
| s3_mountpoint_csi_driver_enabled | Whether to enable the S3 Mountpoint CSI driver | `bool` | `false` | no |
| s3_mountpoint_csi_s3_bucket_arns | List of S3 bucket ARNs to allow access from the S3 Mountpoint CSI driver | `list(string)` | `[…]` | no |
| static_autoscaling_group | Configuration for the static autoscaling group | `object({…})` | `null` | no |
| storage_class | Configuration for the storage class that defines how volumes are allocated in Kubernetes. | `object({…})` | `{…}` | no |
| tfe_cidr | Terraform Enterprise CIDR block | `string` | `"10.52.160.0/20"` | no |
| traefik_cert_arn | The ARN of the certificate to use for Traefik. | `string` | `null` | no |
| traefik_nlb_service_ports | List of additional ports for the Traefik k8s service | `list(object({…}))` | `[]` | no |
| use_private_subnets_for_internal_nlb | Set to `true` if you want to use private subnets for the internal NLB | `bool` | `false` | no |
| vpc_cni_enable_pod_eni | Enable pod ENI support | `bool` | `true` | no |
| vpc_cni_enable_prefix_delegation | Enable prefix delegation for IPv6, allocating IPs in /28 blocks (instead of all at once) | `bool` | `false` | no |
| vpc_cni_external_snat | Needed to enable cross-VPC pod-to-pod communication - see: https://github.com/aws/amazon-vpc-cni-k8s?tab=readme-ov-file#aws_vpc_k8s_cni_externalsnat | `string` | `false` | no |
| vpc_cni_pod_security_group_enforcing_mode | Set the pod security group enforcing mode | `string` | `"standard"` | no |
| vpc_cni_version_override | The version of the VPC CNI plugin to use. If not specified, the default version for the cluster version will be used. | `string` | `""` | no |
| vpc_cni_warm_eni_target | Number of ENIs to keep warm on each node to speed up pod scheduling | `string` | `"1"` | no |
| vpc_cni_warm_ip_target | Number of IPs to keep warm on each node to speed up pod scheduling | `string` | `"8"` | no |
| vpc_config | VPC configuration from the aws/vpc module | `object({…})` | n/a | yes |
| wafv2_arn | The ARN of the WAFv2 WebACL to associate with the ALB | `string` | `""` | no |
| Name | Description |
|---|---|
| alb_arn | The ARN of the main ALB (traefik) |
| alb_arns | Map of ARNs of the ALBs |
| alb_dns_name | The DNS name of the main ALB (traefik) |
| alb_dns_names | Map of DNS names of the ALBs |
| cluster_certificate_authority_data | Base64 encoded certificate data required to communicate with the cluster |
| cluster_endpoint | Endpoint for your Kubernetes API server |
| cluster_oidc_issuer_url | The OIDC issuer URL for the EKS cluster |
| name | The name of the cluster |
| nlb_arns | Map of ARNs of the NLBs |
| nlb_dns_names | Map of DNS names of the NLBs |
| nlb_zone_ids | Map of zone IDs of the NLBs |
| node_security_group_id | The security group ID of the EKS nodes |
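A hypothetical sketch of consuming these outputs, e.g. publishing a DNS record for an NLB (the `internal` map key, the zone, and the record name are placeholders; check your actual map keys with `terraform output nlb_dns_names`):

```hcl
# Assumed private hosted zone; replace with your own.
data "aws_route53_zone" "internal" {
  name         = "internal.example.com."
  private_zone = true
}

resource "aws_route53_record" "internal" {
  zone_id = data.aws_route53_zone.internal.zone_id
  name    = "apps.internal.example.com" # placeholder record name
  type    = "A"

  alias {
    # "internal" is an assumed key of the nlb_* output maps.
    name                   = module.eks.nlb_dns_names["internal"]
    zone_id                = module.eks.nlb_zone_ids["internal"]
    evaluate_target_health = true
  }
}
```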