189 changes: 189 additions & 0 deletions skills/test-ami-eks/SKILL.md
---
name: test-ami-eks
description: Register a Bottlerocket AMI and launch it on EKS for validation
---

# Test AMI on EKS

Register a Bottlerocket image as an AMI, track it, and launch it on an EKS cluster for validation.

## When to Use

- After building a Bottlerocket variant image
- To validate custom packages or configurations work on real infrastructure
- For integration testing before release

## Prerequisites

- Built Bottlerocket image in `bottlerocket/build/images/`
- AWS credentials with EC2 and EKS permissions
- `eksctl`, `kubectl`, `aws` CLI installed
- `jq` for JSON parsing
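
A quick preflight check saves a failure midway through the procedure (a sketch; it prints one line per missing tool and nothing when everything is found):

```shell
# Report any missing prerequisite tools without aborting
for tool in eksctl kubectl aws jq; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```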

## Procedure

### Stage 1: Register AMI

From the grove root (or bottlerocket directory):

```bash
# Auto-detect variant from latest build
./skills/test-ami-eks/register-ami.sh

# Or specify variant explicitly
./skills/test-ami-eks/register-ami.sh --variant aws-k8s-1.34

# Custom options
./skills/test-ami-eks/register-ami.sh \
  --variant aws-k8s-1.34 \
  --arch x86_64 \
  --region us-west-2 \
  --tracking ./test_builds.toml
```

> **Review comment (owner):** These paths should be relative to the skill itself. Anthropic's guidance says that packaged scripts should go under `scripts/` inside the skill directory. I haven't been great at respecting this yet though.

Prints the AMI ID on the last line of output so it can be captured for the next stage.
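
Since the AMI ID is the final line, it can be captured directly with `tail`. A self-contained sketch, with `printf` standing in for the real `register-ami.sh` call:

```shell
# Stand-in for: ./skills/test-ami-eks/register-ami.sh --variant aws-k8s-1.34
register_output=$(printf 'Registering snapshot...\nami-0123456789abcdef0\n')

# The AMI ID is the last line of output
AMI_ID=$(printf '%s\n' "$register_output" | tail -n 1)
echo "$AMI_ID"   # → ami-0123456789abcdef0
```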

### Stage 2: Launch on EKS

```bash
# Launch with AMI ID from register step
# NOTE: This will delete any existing nodegroups in the cluster first
./skills/test-ami-eks/launch-eks-nodegroup.sh --ami ami-0123456789abcdef0

# Full options
./skills/test-ami-eks/launch-eks-nodegroup.sh \
--ami ami-0123456789abcdef0 \
--cluster br-test-cluster \
--nodegroup br-test-ng \
--region us-west-2 \
--instance-type m5.large \
--capacity 2

# Keep existing nodegroups (don't auto-delete)
./skills/test-ami-eks/launch-eks-nodegroup.sh \
--ami ami-0123456789abcdef0 \
--keep-old
```

Creates the cluster if it doesn't exist, then creates the nodegroup.

### Stage 3: Validate

```bash
# Check nodes joined
kubectl get nodes -o wide

# Check Bottlerocket version
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.osImage}'

# Shell into a node's admin container (if enabled): start an SSM session
# to the instance, then from the control container run:
apiclient exec admin bash
```

### Stage 4: Cleanup

```bash
# Delete nodegroup only (keep cluster for reuse)
./skills/test-ami-eks/cleanup-nodegroup.sh \
--cluster br-test-cluster \
--nodegroup br-test-ng

# Delete cluster too
./skills/test-ami-eks/cleanup-nodegroup.sh \
--cluster br-test-cluster \
--nodegroup br-test-ng \
--delete-cluster
```

### List Tracked Builds

```bash
# Show all builds
./skills/test-ami-eks/list-builds.sh --tracking ./test_builds.toml

# Last 5 builds
./skills/test-ami-eks/list-builds.sh --last 5

# Filter by variant
./skills/test-ami-eks/list-builds.sh --variant aws-k8s-1.34
```

## Files

| File | Purpose |
|------|--------|
| `register-ami.sh` | Register image as AMI, track in TOML |
| `launch-eks-nodegroup.sh` | Create cluster/nodegroup with AMI |
| `cleanup-nodegroup.sh` | Delete nodegroup/cluster |
| `list-builds.sh` | Display tracked builds |
| `eksctl-nodegroup.yaml.template` | eksctl config template |

## Tracking File Format

`test_builds.toml`:

```toml
default_region = "us-west-2"
default_cluster = "br-test-cluster"
default_instance_type = "m5.large"

[[builds]]
number = 1
variant = "aws-k8s-1.34"
arch = "x86_64"
region = "us-west-2"
timestamp = "2026-02-05T17:00:00Z"
ami_id = "ami-0123456789abcdef0"
ami_name = "aws-k8s-1.34-test-1"
```
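
Because the file uses a flat `key = "value"` layout, the most recent AMI can be pulled out without a TOML parser. A self-contained sketch (the heredoc stands in for a real `test_builds.toml`):

```shell
# Simulate a tracking file; point the grep at the real test_builds.toml instead
cat > /tmp/test_builds.toml <<'EOF'
default_region = "us-west-2"

[[builds]]
number = 1
ami_id = "ami-0123456789abcdef0"

[[builds]]
number = 2
ami_id = "ami-0fedcba9876543210"
EOF

# Most recent ami_id = last ami_id entry in the file
grep '^ami_id' /tmp/test_builds.toml | tail -n 1 | sed 's/^ami_id *= *"\(.*\)"$/\1/'
```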

## Validation Scripts

### Smoke Test via SSM

Run basic validation commands on nodes via SSM:

```bash
# Test all nodes in nodegroup
./skills/test-ami-eks/smoke-test.sh \
--cluster br-test-cluster \
--nodegroup br-test-ng

# Test specific instance
./skills/test-ami-eks/smoke-test.sh --instance i-0123456789abcdef0
```

Checks:
- SSM connectivity
- System reached multi-user target
- Basic system health
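
The per-node checks need instance IDs first; `jq` (already a prerequisite) is the natural way to pull them from `aws ec2 describe-instances` output. A self-contained sketch with a canned response; the real call would filter on the nodegroup tag (e.g. `--filters "Name=tag:eks:nodegroup-name,Values=br-test-ng"`, an assumed tag key):

```shell
# Canned describe-instances JSON standing in for the real AWS call
resp='{"Reservations":[{"Instances":[{"InstanceId":"i-0123456789abcdef0"},{"InstanceId":"i-0aaaabbbbccccdddd"}]}]}'

# One instance ID per line, ready to feed into per-node SSM checks
printf '%s\n' "$resp" | jq -r '.Reservations[].Instances[].InstanceId'
```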

### Get Console Output

If SSM fails, check console output for boot issues:

```bash
# Get console output
./skills/test-ami-eks/get-console.sh --instance i-0123456789abcdef0

# Raw output only (for parsing)
./skills/test-ami-eks/get-console.sh --instance i-0123456789abcdef0 --raw
```

## Common Issues

**"No valid AWS credentials"**
- Run `aws sts get-caller-identity` to verify credentials
- Ensure credentials have EC2 and EKS permissions

**"Could not detect variant"**
- Specify `--variant` explicitly
- Ensure image was built in `bottlerocket/build/images/`

**"Nodegroup already exists"**
- Run `cleanup-nodegroup.sh` first to delete existing nodegroup

**Nodes not joining cluster**
- Check security groups allow node-to-control-plane communication
- Verify IAM roles have required policies
- Check `kubectl describe node` for errors
66 changes: 66 additions & 0 deletions skills/test-ami-eks/cleanup-nodegroup.sh
#!/bin/bash
set -euo pipefail

# Clean up EKS nodegroup (and optionally cluster)

usage() {
  cat <<EOF
Usage: $(basename "$0") [OPTIONS]

Delete an EKS nodegroup

Options:
  --cluster NAME     Cluster name (default: br-test-cluster)
  --nodegroup NAME   Nodegroup name (default: br-test-ng)
  --region REGION    AWS region (default: us-west-2)
  --delete-cluster   Also delete the cluster
  -h, --help         Show this help
EOF
  exit 1
}

# Defaults
CLUSTER_NAME="br-test-cluster"
NODEGROUP_NAME="br-test-ng"
REGION="us-west-2"
DELETE_CLUSTER=false

while [[ $# -gt 0 ]]; do
  case $1 in
    --cluster) CLUSTER_NAME="$2"; shift 2 ;;
    --nodegroup) NODEGROUP_NAME="$2"; shift 2 ;;
    --region) REGION="$2"; shift 2 ;;
    --delete-cluster) DELETE_CLUSTER=true; shift ;;
    -h|--help) usage ;;
    *) echo "Unknown option: $1"; usage ;;
  esac
done

if ! aws sts get-caller-identity &>/dev/null; then
  echo "Error: No valid AWS credentials"
  exit 1
fi

# Delete nodegroup
if eksctl get nodegroup --cluster "$CLUSTER_NAME" --region "$REGION" --name "$NODEGROUP_NAME" &>/dev/null; then
  echo "Deleting nodegroup $NODEGROUP_NAME..."
  eksctl delete nodegroup \
    --cluster "$CLUSTER_NAME" \
    --region "$REGION" \
    --name "$NODEGROUP_NAME" \
    --wait
  echo "✓ Nodegroup deleted"
else
  echo "Nodegroup $NODEGROUP_NAME not found"
fi

# Optionally delete cluster
if [[ "$DELETE_CLUSTER" == "true" ]]; then
  if eksctl get cluster --name "$CLUSTER_NAME" --region "$REGION" &>/dev/null; then
    echo "Deleting cluster $CLUSTER_NAME..."
    eksctl delete cluster --name "$CLUSTER_NAME" --region "$REGION" --wait
    echo "✓ Cluster deleted"
  else
    echo "Cluster $CLUSTER_NAME not found"
  fi
fi
22 changes: 22 additions & 0 deletions skills/test-ami-eks/eksctl-nodegroup.yaml.template
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ${REGION}

nodeGroups:
  - name: ${NODEGROUP_NAME}
    instanceType: ${INSTANCE_TYPE}
    desiredCapacity: ${DESIRED_CAPACITY}
    ami: ${AMI_ID}
    amiFamily: Bottlerocket
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    bottlerocket:
      settings:
        host-containers:
          admin:
            enabled: true
58 changes: 58 additions & 0 deletions skills/test-ami-eks/get-console.sh
#!/bin/bash
set -euo pipefail

# Get EC2 console output for debugging boot issues

usage() {
  cat <<EOF
Usage: $(basename "$0") --instance INSTANCE_ID [OPTIONS]

Get EC2 console output (forces latest fetch)

Required:
  --instance ID    EC2 instance ID

Options:
  --region REGION  AWS region (default: us-west-2)
  --raw            Output raw console text only
  -h, --help       Show this help
EOF
  exit 1
}

INSTANCE_ID=""
REGION="us-west-2"
RAW=false

while [[ $# -gt 0 ]]; do
  case $1 in
    --instance) INSTANCE_ID="$2"; shift 2 ;;
    --region) REGION="$2"; shift 2 ;;
    --raw) RAW=true; shift ;;
    -h|--help) usage ;;
    *) echo "Unknown option: $1"; usage ;;
  esac
done

if [[ -z "$INSTANCE_ID" ]]; then
  echo "Error: --instance is required"
  usage
fi

# Print a header unless raw output was requested; the AWS call is the same either way
if [[ "$RAW" != "true" ]]; then
  echo "Console output for $INSTANCE_ID (region: $REGION)"
  echo "================================================"
fi

aws ec2 get-console-output \
  --instance-id "$INSTANCE_ID" \
  --region "$REGION" \
  --latest \
  --query 'Output' \
  --output text