Setup: Azure
This repository contains IaC (Infrastructure as Code) for setting up and running the ACE platform on Public Azure Subscription using HashiCorp Terraform.
- Linux-based shell.
- Azure CLI installed, and signed in to the target subscription:
  - az account clear
  - az login
  - az account set --subscription <target-subscription>
- Terraform installed.
- A clone of the git repository from evry-bergen/ace-platform.
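The prerequisites above can be verified with a small pre-flight script before continuing. This is a sketch, not part of the repository; it only checks that the tools are on PATH, not that you are signed in:

```shell
#!/bin/sh
# Pre-flight check for the prerequisites above.
# (Hypothetical helper; not part of the evry-bergen/ace-platform repository.)
MISSING=0
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1" >&2
    MISSING=1
  fi
}
require az
require terraform
require git
[ "$MISSING" -eq 0 ] && echo "all prerequisites present"
```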
We need a centralized location for our Terraform backend state files, so we will create a storage location in our Azure subscription that the Terraform backend will use to read and lock the state files. Locking ensures that we don't accidentally perform a simultaneous Terraform action from another location.
Run the following script to create a new storage account (SA) inside your Azure subscription:

bash tools/create-tf-sa.sh <env>

This script will do the following:
- Create a resource group named aceplatformstate in the given TF_SA_LOCATION.
- Create the storage account aceplatformstate.
- Create a storage container named terraform-state inside the storage account.
- Print the details about the resources created by the script.
When the script is finished, it will print out two variables: the credentials that the Terraform backend will use to access the Azure storage account.
Terraform makes use of environment variables in your local shell, so for convenience we can create a .env file where we fill in the variables as we go, and just run source .env when we want to reload them.
Create the .env file and paste the output from the create-tf-sa.sh script in their respective locations:
export AZURE_STORAGE_ACCOUNT=<sa-name>
export AZURE_STORAGE_KEY=<sa-key>

You can edit the parameters inside tools/create-tf-sa.sh to meet your project's needs. There are, however, some rules you need to follow when making changes in Azure.
First off, the DNS name and the SA name cannot be the same; use a shorter name for the DNS service name, or you will get a duplicate error while creating the SA.
Secondly, storage account names must be unique across all Azure accounts, not just your own. If you pick something generic, or a name similar to an earlier project, you will get a 404 error while creating the SA, without any detailed feedback, as Azure wants to prevent information leaking about existing SAs.
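The format rules (3-24 characters, lowercase letters and digits only) can at least be checked locally before calling Azure; global uniqueness can only be verified against Azure itself. A sketch, not part of the repository:

```shell
#!/bin/sh
# valid_sa_name NAME: local sanity check for Azure storage account names.
# Rules: 3-24 characters, lowercase letters and digits only.
# (Hypothetical helper; uniqueness still has to be checked by Azure.)
valid_sa_name() {
  case "$1" in
    *[!a-z0-9]*) return 1 ;;   # illegal character (uppercase, dash, etc.)
  esac
  n=${#1}
  [ "$n" -ge 3 ] && [ "$n" -le 24 ]
}
```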
During a terraform init, the root configuration directory is consulted for the backend configuration. Terraform scans the root folder for .tf files and caches the configuration for further use.
In practice this means that re-running terraform init will update the working directory to use the new backend settings made in the .tf files. By default, previously installed modules will not be updated during an init; we will make use of the -upgrade parameter during init to ensure that we are running the newest modules.
| File | Purpose |
| --- | --- |
| providers.tf | Specifies the provider to use; ACE makes use of the azurerm provider. This can be changed to make ACE compatible with other providers. |
| modules.tf | Specifies the modules that ACE should use. Here we point to other repositories on GitHub containing the modules that we want to include. |
| variables.tf | Populated with the variables that the modules in modules.tf will use. |
| outputs.tf | Used for outputting larger or combined strings for simple reference in other modules. |
terraform init \
-backend-config="access_key=$AZURE_STORAGE_KEY" \
-backend-config="storage_account_name=$AZURE_STORAGE_ACCOUNT" \
-upgrade
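Since the backend credentials come from the shell environment, a guard like the following (a hypothetical helper, not part of the repository) can catch a forgotten source .env before init fails with an opaque authentication error:

```shell
#!/bin/sh
# check_env VAR...: fail when any named environment variable is unset or
# empty, so terraform init stops early with a clear message.
# (Hypothetical helper; not part of the repository.)
check_env() {
  for _var in "$@"; do
    eval "_value=\${$_var:-}"
    if [ -z "$_value" ]; then
      echo "error: $_var is not set; run 'source .env' first" >&2
      return 1
    fi
  done
}

# Typical use before initializing the backend:
#   . ./.env
#   check_env AZURE_STORAGE_ACCOUNT AZURE_STORAGE_KEY || exit 1
#   terraform init -backend-config="access_key=$AZURE_STORAGE_KEY" ...
```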
Now that we have initialized the backend, we will create a Terraform workspace so that Terraform can store persistent data for this environment. The init step created a .terraform folder and downloaded the necessary modules and plugins based on the providers and modules that we specified earlier.
$ terraform workspace new <env>
We need to create an administrator API user for our Azure subscription, so that Terraform can manage and create resources.
This will be done using the Azure portal, so start by logging in with your desired user at https://portal.azure.com.
Before moving on to creating the user, we first need to generate an SSH key that our Terraform backend will use.
Run the following command to generate the keys (store the key in the root folder and don't specify a passphrase):
ssh-keygen -t rsa -b 4096 -C "<cluster>@<corp>.com"

We can now add the public key (the file ending in .pub) to our .env. Append the following, including the content of the .pub file:

export TF_VAR_ssh_public_key="<publickey>"

Terraform will add and use this SSH key for some of its backend services.
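Rather than pasting the key content by hand, the .env line can be generated from the .pub file. A sketch, assuming the ssh-keygen default file names from the step above; the helper itself is not part of the repository:

```shell
#!/bin/sh
# append_pubkey PUBFILE ENVFILE: append the public key to the .env file
# as TF_VAR_ssh_public_key.
# (Hypothetical helper; file names below are assumptions.)
append_pubkey() {
  pubfile="$1"
  envfile="$2"
  if [ ! -f "$pubfile" ]; then
    echo "error: $pubfile not found" >&2
    return 1
  fi
  printf 'export TF_VAR_ssh_public_key="%s"\n' "$(cat "$pubfile")" >> "$envfile"
}

# Typical use:
#   append_pubkey id_rsa.pub .env && . ./.env
```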
Navigate to Azure Active Directory -> App registrations and press New application registration.
Terraform accesses the Azure API through the azurerm provider, so we want to create a new user with the application type Web app / API. Type in a sensible name, such as tf-admin, and enter http://localhost as the Sign-on URL.
Azure will now automatically navigate to the newly created user, where you should see the Application ID; this will be our client_id variable.
The next step is to create a key for our user: press Settings -> Keys. Type client_secret in the Key description field, set the key to never expire, and press Save. The key will be generated and should now be visible.
Press Save to finalize.
We can now add these variables to our .env file. Append the following, replacing the placeholders with the Application ID and the key value:
export TF_VAR_client_id=<application_id>
export TF_VAR_client_secret=<generated_key>

Remember to run source .env after you save.
Select your desired subscription using: https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade
Here you can see the dashboard for your subscription. Since we want to add a new administrator to our subscription, navigate to Access control (IAM), press the + Add button and press Add role assignment.
A sidebar opens on the right side, where you first select the role Owner, since this is the only role that has access to create new resources. Leave Assign access to as Azure AD user and search for your newly created user in the last Select field.
Press on the user that appears so that it is selected.
Press Save; Azure will assign the user to Owners and show a popup in the upper-right corner when it has finished.
We will now create credentials that Terraform will use to manage Kubernetes, here we will use RBAC (role-based access control) which can be generated by running the following script:
cd tools/
bash create-rbac-sp.sh <env>

This script will print out 4 variables; append the following to .env:
export TF_VAR_aks_rbac_client_app_id=
export TF_VAR_aks_rbac_server_app_id=
export TF_VAR_aks_rbac_server_app_secret=
export TF_VAR_tenant_id=

The Tenant ID is the GUID of your Azure Active Directory tenant, which will be used by the Terraform backend while creating the Kubernetes cluster.
Optional: To find your Tenant ID in the Azure portal, navigate to: Dashboard -> Azure Active Directory -> Properties.
Here you will see the Directory ID which is your Tenant ID.
Further reading: https://docs.microsoft.com/en-us/azure/aks/aad-integration
Now that we have sorted out all the required variables for the Terraform deployment, we can review the recipe that Terraform will use to deploy our infrastructure.
Make sure to run the source .env command before continuing, as Terraform needs the variables exported to our local shell.
Run the following command to have Terraform plan without deploying our configuration:

terraform plan -var-file env/<env>.tfvars

Replace <env> with your desired environment.
Terraform will now print out the plan for reaching the desired state which is the configuration we have given Terraform. If you look at one of the last lines printed, you will see something similar to this:
Plan: 6 to add, 0 to change, 0 to destroy.

When upgrading an existing Terraform configuration, the desired state will differ from the current state. If you run terraform plan in such a case, Terraform will plan how to upgrade the infrastructure to meet the desired state.
So in this case, Terraform will perform 6 actions, all of which create infrastructure.
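In automation it can be useful to parse that summary line before applying, for example to refuse to proceed when the plan would destroy anything. A hypothetical helper, not part of the repository; it assumes the summary line format shown above:

```shell
#!/bin/sh
# plan_destroy_count: read 'terraform plan' output on stdin and print the
# number of resources the plan would destroy. Prints nothing when no
# summary line is found. (Hypothetical helper for CI-style gating.)
plan_destroy_count() {
  sed -n 's/^Plan: [0-9][0-9]* to add, [0-9][0-9]* to change, \([0-9][0-9]*\) to destroy\.$/\1/p'
}

# Typical use:
#   count=$(terraform plan -var-file env/dev.tfvars | plan_destroy_count)
#   [ "${count:-0}" -eq 0 ] || { echo "refusing: plan destroys resources" >&2; exit 1; }
```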
When you have reviewed the terraform plan, it's time to deploy the changes to our subscription.
terraform apply -var-file env/<env>.tfvars

Creating a new Azure AKS cluster can take 10-15 minutes.
**OUTDATED: Review this**

The Kubernetes cluster by design does not have any user/group authentication, so we need to rely on another service to create groups for users and admins. Integrating group-based authentication will make it easier to administer users down the road.
For the ACE-Platform we will make use of Azure's Active Directory.
Navigate to Azure Portal --> Azure Active Directory -> Groups and create a new Group.
We need to create two security groups and save the group IDs in the project's documentation for later use. Create the following security groups:

- ace--users
- ace--admins
Now that we have created the groups, we can assign roles for managing the Kubernetes cluster.
Navigate to Azure Portal -> All Resources -> The newly created kubernetes cluster -> Access control -> Role assignments -> Add.
Add the role Azure Kubernetes Service Cluster Admin Role and select the ace--admins group. Remember to press on the group name, as it is not added automatically.
Do the same for the role Azure Kubernetes Service Cluster User Role and the ace--users group.
Save kubernetes config file to ~/.kube/<cluster>
terraform output -module=aks kube_config > ~/.kube/ace_cluster
Set KUBECONFIG environment variable to the kubernetes config file
export KUBECONFIG=~/.kube/ace_cluster
kubectl get nodes
NAME                     STATUS    ROLES     AGE    VERSION
aks-default-75135322-0   Ready     agent     23m    v1.9.6
aks-default-75135322-1   Ready     agent     23m    v1.9.6
aks-default-75135322-2   Ready     agent     23m    v1.9.6
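The config-file steps above can be wrapped in a small helper for switching between saved cluster configs. A sketch, not part of the repository; the ace_cluster file name comes from the terraform output step above:

```shell
#!/bin/sh
# use_cluster NAME: point kubectl at a saved config under ~/.kube/.
# (Hypothetical helper mirroring the manual KUBECONFIG steps above.)
use_cluster() {
  cfg="$HOME/.kube/$1"
  if [ ! -f "$cfg" ]; then
    echo "error: no kubeconfig at $cfg" >&2
    return 1
  fi
  KUBECONFIG="$cfg"
  export KUBECONFIG
  echo "using $cfg"
}

# Typical use:
#   use_cluster ace_cluster && kubectl get nodes
```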
- https://docs.microsoft.com/bs-latn-ba/azure/aks/aad-integration#create-server-application
- https://github.com/Azure/azure-sdk-for-go/issues/2089
- https://github.com/terraform-providers/terraform-provider-azurerm/issues/2076
- https://github.com/mjisaak/azure-active-directory/blob/master/README.md


