This repository was archived by the owner on Jan 15, 2025. It is now read-only.

Setup: Azure

Fredrick Myrvoll edited this page Mar 22, 2019 · 3 revisions

This repository contains IaC (Infrastructure as Code) for setting up and running the ACE platform on Public Azure Subscription using HashiCorp Terraform.

1. Prerequisites

2. Generate Azure Storage Account

We need a centralized location for our Terraform backend state files, so we will create a storage location in our Azure subscription that the Terraform backend will use to read and lock the state files. This prevents us from accidentally performing a simultaneous Terraform action from another location.

Run the following script for creating a new storage account (SA) inside your Azure Subscription:

bash tools/create-tf-sa.sh <env>

This script will do the following:

  • Create a resource group named aceplatformstate in the given TF_SA_LOCATION
  • Create the storage account: aceplatformstate
  • Create a storage container named terraform-state inside the storage account
  • Print the details of the resources created by the script

When the script is finished, it will print out two variables, which are the credentials the Terraform backend will use to access the Azure Storage Account.

Terraform makes use of environment variables in your local shell, so for convenience we can create a .env file where we fill in the variables as we go, and simply run source .env whenever we want to reload them.

Create the .env file and paste the output from the create-tf-sa.sh script in their respective locations:

export AZURE_STORAGE_ACCOUNT=<sa-name>
export AZURE_STORAGE_KEY=<sa-key>
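As a quick sanity check, you can source the file and confirm both variables are exported before touching Terraform. A minimal sketch (the values below are placeholders; in practice, paste the real output from create-tf-sa.sh):

```shell
# Write a throwaway .env with placeholder values; replace with the real
# output from create-tf-sa.sh in practice.
cat > .env <<'EOF'
export AZURE_STORAGE_ACCOUNT=aceplatformstate
export AZURE_STORAGE_KEY=dummy-key
EOF

# Load the variables into the current shell.
source .env

# Verify both variables are non-empty before running any terraform command.
for var in AZURE_STORAGE_ACCOUNT AZURE_STORAGE_KEY; do
  [ -n "$(eval echo "\$$var")" ] || { echo "Missing $var" >&2; exit 1; }
done
echo "Storage backend variables loaded."
```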

2.1. Customizing Storage Account names

You can edit the parameters inside tools/create-tf-sa.sh to meet your project needs. There are, however, some rules that you need to follow when making changes in Azure.

First off, the DNS name and SA name cannot be the same; use a shorter name for the DNS service name, or you will get a duplicate error while creating the SA.

Secondly, storage account names must be unique across all Azure accounts, not just your own. So if you pick something generic, or a name similar to an earlier project, you will get a 404 error while creating the SA, without any detailed feedback, as Azure wants to prevent information leaking about existing storage accounts.
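On top of uniqueness, Azure storage account names must be 3 to 24 characters of lowercase letters and digits only. A minimal sketch for generating and validating a rule-compliant candidate name (the project prefix is a made-up example, and the timestamp suffix is just a cheap uniqueness trick):

```shell
# Azure storage account names: 3-24 chars, lowercase letters and digits only,
# and globally unique across all Azure accounts.
PREFIX="aceplat"                  # hypothetical project prefix
SUFFIX=$(date +%s | tail -c 7)    # last digits of the epoch time for cheap uniqueness
SA_NAME="${PREFIX}${SUFFIX}"

# Validate against the naming rules before calling Azure at all.
if echo "$SA_NAME" | grep -Eq '^[a-z0-9]{3,24}$'; then
  echo "Candidate storage account name: $SA_NAME"
else
  echo "Invalid storage account name: $SA_NAME" >&2
  exit 1
fi
```

A local check like this only catches rule violations; the global uniqueness check still happens on Azure's side when the SA is created.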

3. Initialize the Terraform backend

During a terraform init, the root configuration directory is consulted for the backend configuration. Terraform scans the root folder for .tf files and caches the configuration for further use.

In practice this means that re-running terraform init will update the working directory to use the new backend settings made in .tf files. By default, previously installed modules will not be updated during an init, so we will make use of the -upgrade parameter during init to ensure that we are running the newest modules.

3.1. ACE terraform configuration files

providers.tf: Specifies the provider to use; ACE makes use of the azurerm provider. This can be changed to make ACE compatible with other providers.
modules.tf: Specifies the modules that ACE should use. Here we point to other repositories on GitHub containing the modules we want to include. Example modules: tf-azure-aks, tf-helm-traefik or tf-helm-prometheus.
variables.tf: Populated with the variables that our modules in modules.tf will use.
outputs.tf: Used for outputting larger or combined strings with simple references in other modules.

3.2. Run the following command to initialize the backend

terraform init \
  -backend-config="access_key=$AZURE_STORAGE_KEY" \
  -backend-config="storage_account_name=$AZURE_STORAGE_ACCOUNT" \
  -upgrade
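If the backend credentials are missing from the shell, init fails with a less-than-obvious error. A small guard can fail fast instead; this is a sketch, not part of the repository's tooling, and the exported values below are placeholders standing in for the real ones from .env:

```shell
# Fail fast if the Terraform backend credentials are missing from the shell.
check_backend_env() {
  for var in AZURE_STORAGE_ACCOUNT AZURE_STORAGE_KEY; do
    if [ -z "$(eval echo "\$$var")" ]; then
      echo "Error: $var is not set; run 'source .env' first." >&2
      return 1
    fi
  done
}

# Example: with placeholder values exported, the check passes.
export AZURE_STORAGE_ACCOUNT=aceplatformstate
export AZURE_STORAGE_KEY=dummy-key
check_backend_env && echo "Backend credentials present."
```

In practice you would run `check_backend_env && terraform init ...` so that init never starts against a half-configured backend.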

4. Terraform workspace

Now that we have initialized the backend, we will create a Terraform workspace so that Terraform can store this environment's persistent data separately. The init we ran earlier already created a .terraform folder and downloaded the necessary modules and plugins based on the providers and modules that we specified.

4.1. Run the following command to create a new workspace

$ terraform workspace new <env>

5. Preparing Azure subscription for Terraform

We need to create an administrator API user for our Azure subscription, so that Terraform can manage and create resources.

This will be done using the Azure portal, so start by logging into your desired user at https://portal.azure.com.

5.1. Generate SSH Key

Before moving on to creating a user, we first need to generate an SSH key which our Terraform backend will use.
Run the following command to generate the keys (store the key in the root folder, and don't specify a passphrase):

ssh-keygen -t rsa -b 4096 -C "<cluster>@<corp>.com"

We can now add the public key (the file ending in .pub) to our .env. Append the following, inserting the content of the .pub file:

export TF_VAR_ssh_public_key="<publickey>"

Terraform will add and use this SSH key for some of its backend services.
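The two steps above can be sketched end to end as follows (the comment string and the key file name ace_cluster are placeholders; use your own values):

```shell
# Generate a 4096-bit RSA key pair with no passphrase (-N "").
# -f writes the pair to ./ace_cluster and ./ace_cluster.pub.
ssh-keygen -t rsa -b 4096 -N "" -C "cluster@example.com" -f ./ace_cluster

# Append the public key to .env so Terraform picks it up as TF_VAR_ssh_public_key.
echo "export TF_VAR_ssh_public_key=\"$(cat ./ace_cluster.pub)\"" >> .env

# Reload and confirm the variable is populated (prints the key type, e.g. ssh-rsa).
source .env
echo "Public key loaded: ${TF_VAR_ssh_public_key%% *}"
```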

5.3. Create a new service principal.

Navigate to Azure Active Directory -> App registrations and press New application registration.

Terraform makes use of the Azure API through the azurerm plugin, so we want to create a new user with the application type Web app / API.
Type in a sensible name, such as tf-admin, and type http://localhost as the Sign-on URL.

Azure will now automatically navigate to the newly created user, and you should now see the Application ID which will be our client_id variable.

The next step is to create a key for our user, press Settings -> Keys. Type client_secret inside Key description, choose that the key should never expire and press Save. The key will be generated and should now be visible.


Press Save to finalize.

We can now add the variables to our .env file. Append the following, replacing the placeholders with the Application ID and key value:

export TF_VAR_client_id=<application_id>
export TF_VAR_client_secret=<generated_key>

Remember to run source .env after you save.

5.4. Grant permissions to existing service principal.

Select your desired subscription using: https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade
Here you can see the dashboard for your subscription. Since we want to add a new administrator to our subscription, navigate to Access control (IAM), press the + Add button and press Add role assignment.


A sidebar opens on the right side. First select the role Owner, since this is the only role that has access to create new resources. Leave Assign access to set to Azure AD user, and search for your newly created user in the last Select field.

Press on the user that popped up to select it.

Press Save and Azure should now assign the user to Owners and give a popup in the upper right corner when it has finished.

5.5. RBAC Service Principal for managing Kubernetes

We will now create the credentials that Terraform will use to manage Kubernetes. Here we will use RBAC (role-based access control) credentials, which can be generated by running the following script:

cd tools/
bash create-rbac-sp.sh <env>

This script will print out four variables; append the following to .env:

export TF_VAR_aks_rbac_client_app_id=
export TF_VAR_aks_rbac_server_app_id=
export TF_VAR_aks_rbac_server_app_secret=
export TF_VAR_tenant_id=

The Tenant ID is the GUID of your Azure Active Directory tenant, which will be used by the Terraform backend while creating the Kubernetes cluster.

Optional: To find your Tenant ID in the Azure portal, navigate to: Dashboard -> Azure Active Directory -> Properties.
Here you will see the Directory ID which is your Tenant ID.

Further reading: https://docs.microsoft.com/en-us/azure/aks/aad-integration

6. Terraform Plan

Now that we have sorted out all the required variables for the Terraform deployment, we can review the recipe that Terraform will use to deploy our IaC (Infrastructure as Code).

Make sure to run the source .env command before continuing, as Terraform needs the variables exported to our local shell.

Run the following command to make Terraform plan, without deploying our configuration:

terraform plan -var-file env/<env>.tfvars

Replace <env> with your desired environment.

Terraform will now print out the plan for reaching the desired state which is the configuration we have given Terraform. If you look at one of the last lines printed, you will see something similar to this:

Plan: 6 to add, 0 to change, 0 to destroy.

When upgrading an existing Terraform configuration, the desired state will differ from the current state. If you run a terraform plan in such a case, Terraform will plan how to upgrade the IaC to meet the desired state.

So in this case, Terraform will perform 6 actions, all of which create new infrastructure.

7. Terraform Apply

When you have reviewed the terraform plan, it's time to deploy the changes to our subscription.

terraform apply -var-file env/<env>.tfvars

Creating a new Azure AKS cluster can take up to 10-15 minutes.

8. Kubernetes RBAC groups

**OUTDATED: Review this.** The Kubernetes cluster by design does not have any user/group authentication, so we need to rely on another service for creating groups for users and admins. Integrating group-based authentication will make it easier to administer users down the road.

For the ACE-Platform we will make use of Azure's Active Directory.
Navigate to Azure Portal --> Azure Active Directory -> Groups and create a new Group.

We need to create two security groups, and save the group IDs in the project's documentation for later use. Create the following security groups:

  1. ace--users
  2. ace--admins

Now that we have created the groups, we can assign roles for managing the Kubernetes cluster.

Navigate to Azure Portal -> All Resources -> The newly created kubernetes cluster -> Access control -> Role assignments -> Add.

Add the role Azure Kubernetes Service Cluster Admin Role and select the ace--admins group. Remember to press on the group name, as it is not added automatically.
Do the same for the role Azure Kubernetes Service Cluster User Role and the ace--users group.

9. Kubeconfig

Save the Kubernetes config file to ~/.kube/<cluster> (here we use ace_cluster):

terraform output -module=aks kube_config > ~/.kube/ace_cluster

Set the KUBECONFIG environment variable to the Kubernetes config file:

export KUBECONFIG=~/.kube/ace_cluster
kubectl get nodes
NAME                     STATUS    ROLES     AGE       VERSION
aks-default-75135322-0   Ready     agent     23m       v1.9.6
aks-default-75135322-1   Ready     agent     23m       v1.9.6
aks-default-75135322-2   Ready     agent     23m       v1.9.6

10. Further reading

Notes on RBAC SP creation