This exercise will challenge you to create a 'basic' multi-tier application on AWS.
The application will be a webapp that uses a Cloud-managed Postgres instance, but will also require the provisioning of attendant networking and security infrastructure.
In total, the exercise will step through provisioning:
- using existing VPC and subnet network resources
- a load balancer
- application instances
- an RDS Postgres database instance
- network and security group configuration that permits:
  - ingress of http(s) traffic to the load balancer from the Internet
  - http(s) traffic from the load balancer to the app instances
  - ssh access to the app instances from the Internet
  - access from the application instances to the database
- credential management:
  - a key pair for use with app instances
  - master and IAM credential information for the database instance
The final deployment will look like:
Create a config.tf file with an AWS provider specified to use us-east-1
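A minimal config.tf for this step might look like the following sketch:

```hcl
# config.tf -- AWS provider pinned to the us-east-1 region
provider "aws" {
  region = "us-east-1"
}
```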
Export environment variables for:
AWS_DEFAULT_REGION=us-east-1
AWS_ACCESS_KEY_ID=<your api access key>
AWS_SECRET_ACCESS_KEY=<your secret access key>
Create an empty main.tf file.
Run:
terraform init
terraform plan
Expected Result: Terraform should initialize plugins and report zero resource additions, modifications, and deletions
Declare variables for:
- name - your name/userid, e.g. jsmith
- vpc_id - the id of the VPC network to use, e.g. vpc-58a29221
Create a terraform.tfvars file and set the value of the name variable in that file.
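One way to sketch these declarations (descriptions mirror the exercise text; the example value is just the one given above):

```hcl
# variables.tf
variable "name" {
  description = "your name/userid, e.g. jsmith"
}

variable "vpc_id" {
  description = "the id of the VPC network to use, e.g. vpc-58a29221"
}

# terraform.tfvars
name = "jsmith"
```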
Use the aws_subnet_ids data source to resolve the subnet ids in the region's default VPC.
Hint: the default VPC id for the region is available on the EC2 Dashboard, e.g. vpc-58a29221
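A sketch of the data source, using the `default_vpc` name that the subnet-lookup hint later in this exercise expects (0.11-style interpolation, matching the rest of the document):

```hcl
# Resolve the subnet ids of the default VPC
data "aws_subnet_ids" "default_vpc" {
  vpc_id = "${var.vpc_id}"
}
```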
Create the following security groups:
- public-web - a security group that permits http and https access from the public Internet (tcp ports 80 & 443)
- public-ssh - a security group that permits ssh access from the public Internet (tcp port 22)
- internal-web - a security group that permits http access only from sources in the VPC (tcp port 80)
- outbound - a security group that permits access from the VPC to the Internet
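A sketch of the first group; the others follow the same pattern with different ports and source CIDR blocks. The `${var.name}` prefix in the group name is an assumption, not a requirement of the exercise:

```hcl
# public-web: permit http and https from the public Internet
resource "aws_security_group" "public_web" {
  name        = "${var.name}-public-web"
  description = "Permit http/https access from the public Internet"
  vpc_id      = "${var.vpc_id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```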
Create an AWS Key Pair using an ssh keypair.
Aside: how to generate a keypair:
ssh-keygen -t rsa -f exercise.id_rsa  # do not specify a passphrase
Hint: Use the file function in Terraform interpolation syntax
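Putting the hint together, a sketch of the key pair resource (the key name is an assumption; the file path follows from the ssh-keygen command above):

```hcl
# Register the public half of the generated keypair with AWS
resource "aws_key_pair" "exercise" {
  key_name   = "exercise-${var.name}"
  public_key = "${file("exercise.id_rsa.pub")}"
}
```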
Run terraform plan
Expected Result: 1 to add, 0 to change, 0 to destroy.
Run terraform apply
Create one t2.medium EC2 instance using the Amazon ECS-Optimized Linux AMI. Details:
- ami id in us-east-1: ami-fad25980
- name: amzn-ami-2018.03.d-amazon-ecs-optimized
The EC2 instance should:
- reference and use the generated keypair
- be launched into one of the default vpc subnets
- have a public IP
- be publicly-accessible only via ssh
- have a tag of Name=exercise-<yourname>
Hint: subnet_id = "${element(data.aws_subnet_ids.default_vpc.ids, 0)}"
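A sketch of the instance meeting the requirements above; the resource and data source names are assumptions carried over from earlier steps:

```hcl
resource "aws_instance" "app" {
  ami                         = "ami-fad25980"
  instance_type               = "t2.medium"
  key_name                    = "${aws_key_pair.exercise.key_name}"
  subnet_id                   = "${element(data.aws_subnet_ids.default_vpc.ids, 0)}"
  associate_public_ip_address = true

  tags {
    Name = "exercise-${var.name}"
  }
}
```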
Inspect Terraform state:
head terraform.tfstate
What format does this look like?
Find the instance you just created and look at it in the AWS EC2 console:
grep i-.* terraform.tfstate
Attach the public-ssh, internal-web, and outbound security groups to the ec2 instance.
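Attaching by id might look like the following addition to the instance resource, assuming the three security groups were declared as `public_ssh`, `internal_web`, and `outbound`:

```hcl
resource "aws_instance" "app" {
  # ...existing arguments from the previous step...

  vpc_security_group_ids = [
    "${aws_security_group.public_ssh.id}",
    "${aws_security_group.internal_web.id}",
    "${aws_security_group.outbound.id}",
  ]
}
```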
You should now be able to login to the instance via ssh with:
ssh -i ./exercise.id_rsa ec2-user@<public DNS>
e.g.
ssh -i ./exercise.id_rsa ec2-user@ec2-107-23-217-33.compute-1.amazonaws.com
__| __| __|
_| ( \__ \ Amazon ECS-Optimized Amazon Linux AMI 2018.03.d
____|\___|____/
For documentation visit, http://aws.amazon.com/documentation/ecs
Reconfigure Terraform's state storage backend to use s3:
terraform {
backend "s3" {
bucket = "qm-training-cm-us-east-1"
key = "infra/terraform/qm-sandbox/us-east-1/cm/exercise-<your name>.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "TerraformStateLock"
}
}
Run terraform init again to re-initialize the state storage backend.
Create an Elastic Load Balancer.
Connect the ELB to the app instance and specify a health check of / on port 80.
Share the location of the ELB and App Server via a module output.
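A classic ELB wired to the app instance could be sketched as follows; resource names and health-check timing values are assumptions:

```hcl
resource "aws_elb" "app" {
  name            = "exercise-${var.name}"
  subnets         = ["${data.aws_subnet_ids.default_vpc.ids}"]
  security_groups = ["${aws_security_group.public_web.id}"]
  instances       = ["${aws_instance.app.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    target              = "HTTP:80/"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}

# Share the ELB and app server locations as outputs
output "elb_dns_name" {
  value = "${aws_elb.app.dns_name}"
}

output "app_public_dns" {
  value = "${aws_instance.app.public_dns}"
}
```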
Open up the ELB location in a web browser!
Use the Terraform community RDS module to instantiate a small Postgres DB:
engine = "postgres"
engine_version = "9.6.3"
instance_class = "db.t2.micro"
allocated_storage = 5
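A sketch of the module call using the community RDS module; the module source, identifier, and credential values shown are assumptions, and the module requires additional networking arguments (subnet ids, security groups, parameter-group family) not shown here:

```hcl
module "db" {
  source = "terraform-aws-modules/rds/aws"

  identifier        = "exercise-${var.name}"
  engine            = "postgres"
  engine_version    = "9.6.3"
  instance_class    = "db.t2.micro"
  allocated_storage = 5

  username = "exercise"
  password = "change-me"  # assumption: supply a real secret via variables

  # ...plus the subnet/security-group arguments the module requires...
}
```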
Navigate to the /counter path on the ELB. Is it counting?
Consider that we might want to have multiple instances...
Add the 'count' field to the aws_instance resource definition, set to 1. Reference count.index in subnet lookup.
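With count = 1 the behaviour is unchanged; raising it later spreads the instances across the default subnets via count.index. A sketch:

```hcl
resource "aws_instance" "app" {
  count     = 1
  subnet_id = "${element(data.aws_subnet_ids.default_vpc.ids, count.index)}"

  # ...other arguments as before...
}
```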
Testing modules locally can be accomplished using a series of Make tasks
contained in this repo.
| Make Task | What happens |
|---|---|
| all | Execute the canonical build for the generic infrastructure module (does not destroy infra) |
| converge | Execute kitchen converge for all modules |
| lint | Execute tflint for generic infrastructure module |
| test | Execute kitchen test --destroy=always for all modules |
| verify | Execute kitchen verify for all modules |
| destroy | Execute kitchen destroy for all modules |
| kitchen | Execute kitchen <command>. Specify the command with the COMMAND argument to make |
e.g. run a single test: make kitchen COMMAND="verify minimal-aws"
Typical Workflow:
- Start off with a clean slate of running test infrastructure: make destroy; make all
- Make changes and (repeatedly) run: make converge && make verify
- Rebuild everything from scratch: make destroy; make all
- Commit and issue a pull request
Test Kitchen uses the concept of "instances" as its medium for multiple test
packages in a project.
An "instance" is the combination of a test suite and a platform.
This project uses a single platform for all specs (e.g. aws).
The name of this platform actually doesn't matter since the terraform provisioner
and driver are not affected by it.
You can see the available test instances by running the kitchen list command:
$ make kitchen COMMAND=list
Instance     Driver     Provisioner  Verifier   Transport  Last Action  Last Error
default-aws  Terraform  Terraform    Terraform  Ssh        Verified

To run Test Kitchen processes for a single instance, you must use the kitchen
target from the make file and pass the command and the instance name using the
COMMAND variable to make.
# where 'default-aws' is an instance name from kitchen's list
$ make kitchen COMMAND="converge default-aws"