From 14ce350ef4ca9fdcf163747c2254e8fd83227e8d Mon Sep 17 00:00:00 2001 From: "Sandesh, Pallapati Immanuel Prabhu" Date: Tue, 8 Nov 2022 11:31:15 +0530 Subject: [PATCH 1/2] EKS Provisioning Gitlab(WIP) --- .../setup-eks-provisioning-pipeline.asciidoc | 5 + .../setup-eks-provisioning-pipeline.asciidoc | 106 +++++++++++ .../setup-eks-provisioning-pipeline.asciidoc | 5 + .../pipelines/common/pipeline_generator.lib | 4 +- .../pipelines/gitlab/pipeline_generator.sh | 179 +++++++++++++++++ .../gitlab/templates/common/.gitlab-ci.yml | 12 ++ .../common/install-ingress-controller.sh | 4 + .../gitlab/templates/eks/eks-pipeline.cfg | 67 +++++++ .../eks/eks-provisioning.yml.template | 180 ++++++++++++++++++ .../gitlab/templates/eks/install-rancher.sh | 15 ++ .../gitlab/templates/eks/obtain-dns.sh | 8 + 11 files changed, 583 insertions(+), 2 deletions(-) create mode 100644 documentation/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc create mode 100644 documentation/src/common_templates/setup-eks-provisioning-pipeline.asciidoc create mode 100644 documentation/src/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc create mode 100644 scripts/pipelines/gitlab/pipeline_generator.sh create mode 100644 scripts/pipelines/gitlab/templates/common/.gitlab-ci.yml create mode 100644 scripts/pipelines/gitlab/templates/common/install-ingress-controller.sh create mode 100644 scripts/pipelines/gitlab/templates/eks/eks-pipeline.cfg create mode 100644 scripts/pipelines/gitlab/templates/eks/eks-provisioning.yml.template create mode 100644 scripts/pipelines/gitlab/templates/eks/install-rancher.sh create mode 100644 scripts/pipelines/gitlab/templates/eks/obtain-dns.sh diff --git a/documentation/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc b/documentation/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc new file mode 100644 index 000000000..7e88adfdb --- /dev/null +++ b/documentation/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc @@ -0,0 +1,5 @@ +:provider: Gitlab +:pipeline_type: 
pipeline +:path_provider: gitlab +:trigger_sentence_gitlab: This pipeline will be configured to be executed inside a CI pipeline +include::../../common_templates/setup-eks-provisioning-pipeline.asciidoc[] \ No newline at end of file diff --git a/documentation/src/common_templates/setup-eks-provisioning-pipeline.asciidoc b/documentation/src/common_templates/setup-eks-provisioning-pipeline.asciidoc new file mode 100644 index 000000000..60b1a3bab --- /dev/null +++ b/documentation/src/common_templates/setup-eks-provisioning-pipeline.asciidoc @@ -0,0 +1,106 @@ +:toc: macro +toc::[] +:idprefix: +:idseparator: - + += Setting up the AWS EKS provisioning {pipeline_type} on {provider} +In this section we will create a {pipeline_type} which will provision an AWS EKS cluster. This {pipeline_type} will be configured to be manually triggered by the user. As part of EKS cluster provisioning, an NGINX Ingress controller is deployed and a .env file with the name `eks-variables` is created in the .github folder, which contains, among others, the DNS name of the Ingress controller, which you will need to add as a CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix to retrieve the DNS name of the Ingress controller independently. + +The creation of the {pipeline_type} will follow the project workflow, so a new branch named `feature/eks-provisioning` will be created, and the YAML file for the workflow and the Terraform files for creating the cluster will be pushed to it. + +Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the `-b` flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the `-w` flag.
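The DNS name mentioned above is obtained later in this patch by `obtain-dns.sh`, which polls `kubectl` until AWS has assigned a load-balancer hostname. The retry logic reduces to "re-query until non-empty"; a self-contained sketch with a stubbed lookup (the hostname and attempt count below are made up, and the real script sleeps 5s between polls):

```shell
attempts=0
dnsName=""

# Stand-in for:
#   kubectl get svc --namespace nginx-ingress ... -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# which returns an empty string until the AWS load balancer has been provisioned.
mock_lookup_dns() {
  attempts=$((attempts + 1))
  dnsName=""
  if [ "$attempts" -ge 3 ]; then
    dnsName="abc123.eu-west-1.elb.amazonaws.com"
  fi
}

mock_lookup_dns
while [ -z "$dnsName" ]; do
  # the real obtain-dns.sh does `sleep 5s` here between polls
  mock_lookup_dns
done
echo "dns=$dnsName"
```

Note the lookup is a plain function call rather than a `$(...)` command substitution, so the attempt counter survives between polls; the real script re-runs `kubectl` instead.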
+ +The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh` will automatically create this new branch, create the EKS provisioning {pipeline_type} based on the YAML template, create the Pull Request and, if possible, merge this new branch into the specified branch. + +=== Prerequisites + +ifdef::trigger_sentence_github[ * Add AWS credentials as https://docs.github.com/en/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository[Github Secrets] in your repository.] + +ifdef::trigger_sentence_azure[ * Install the https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks[Terraform extension] for Azure DevOps.] +ifdef::trigger_sentence_azure[ * Create a service connection for connecting to an AWS account (as explained in the above Terraform extension link) and name it `AWS-Terraform-Connection`. If you already have a service connection available or you need a specific connection name, please update `eks-pipeline.cfg` accordingly.] + +* An S3 bucket. You can use an existing one or https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-buckets-creating[create a new one] with the following command: +``` +aws s3 mb +# Example: aws s3 mb s3://terraformStateBucket +``` + +* An AWS IAM user with https://github.com/devonfw/hangar/blob/master/documentation/aws/setup-aws-account-iam-for-eks.asciidoc#check-iam-user-permissions[required permissions] to provision the EKS cluster. + +* This script will commit and push the corresponding YAML template into your repository, so please be sure your local repository is up-to-date (i.e. you have pulled the latest changes with `git pull`). + +== Creating the {pipeline_type} using the provided script + +Before executing the workflow generator, you will need to customize some input variables about the environment. Also, you may want to use existing VPC and subnets instead of creating new ones.
To do so, you can either edit the `terraform.tfvars` file or take advantage of the `set-terraform-variables.sh` script located at `/scripts/environment-provisioning/aws/eks`, which allows you to create or update values for the required variables, passing them as flags. + +Example: creating a new VPC on cluster creation: + +``` +./set-terraform-variables.sh --region --instance_type --vpc_name --vpc_cidr_block +``` +Example: reusing existing VPC and subnets: +``` +./set-terraform-variables.sh --region --instance_type --existing_vpc_id --existing_vpc_private_subnets +``` +* Rancher is installed by default on the cluster after provisioning. If you wish to change this, please update `eks-pipeline.cfg` accordingly. + +=== Usage +``` +pipeline_generator.sh \ + -c \ + ifdef::trigger_sentence_azure,trigger_sentence_github[-n \] + -d \ + --cluster-name \ + ifdef::trigger_sentence_azure,trigger_sentence_github[--s3-bucket \] + ifdef::trigger_sentence_azure,trigger_sentence_github[--s3-key-path \] + ifdef::trigger_sentence_gitlab[--terraform-eks-state \] + [-b ] \ + [-w] +``` + +NOTE: The config file for the EKS provisioning workflow is located at `/scripts/pipelines/{path_provider}/templates/eks/eks-pipeline.cfg`. + +=== Flags +``` +-c, --config-file [Required] Configuration file containing workflow definition. +ifdef::trigger_sentence_azure,trigger_sentence_github[-n, --pipeline-name [Required] Name that will be set to the {pipeline_type}.] +-d, --local-directory [Required] Local directory of your project (the path should always be using '/' and not '\'). + --cluster-name [Required] Name for the cluster. + ifdef::trigger_sentence_azure,trigger_sentence_github[--s3-bucket [Required] Name of the S3 bucket where the Terraform state of the cluster will be stored.] + ifdef::trigger_sentence_azure,trigger_sentence_github[--s3-key-path [Required] Path within the S3 bucket where the Terraform state of the cluster will be stored.]
+ ifdef::trigger_sentence_gitlab[--terraform-eks-state [Required] Name of the GitLab-managed Terraform state file of the cluster.] +-b, --target-branch Name of the branch the Pull Request will target. PR is not created if the flag is not provided. +-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag. +``` + +=== Example + +``` +ifdef::trigger_sentence_azure,trigger_sentence_github[./pipeline_generator.sh -c ./templates/eks/eks-pipeline.cfg -n eks-provisioning -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name hangar-eks-cluster --s3-bucket terraformStateBucket --s3-key-path eks/state -b develop -w] +ifdef::trigger_sentence_gitlab[./pipeline_generator.sh -c ./templates/eks/eks-pipeline.cfg -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name hangar-eks-cluster --aws-region eu-west-1] +``` + +== Appendix: Interacting with the cluster + +First, generate a `kubeconfig` file for accessing the AWS EKS cluster: + +``` +aws eks update-kubeconfig --name --region +``` +Now you can use the `kubectl` tool to communicate with the cluster. + +To enable an IAM user to connect to the EKS cluster, please refer to https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html[this guide]. + +To get the DNS name of the NGINX Ingress controller on the EKS cluster, run the following command: +``` +kubectl get svc --namespace nginx-ingress nginx-ingress-nginx-ingress-controller -o jsonpath={.status.loadBalancer.ingress[0].hostname} +``` + +Rancher will be available on `https:///dashboard`. + +== Appendix: Rancher resources + +* https://rancher.com/docs/rancher/v2.6/en/cluster-admin/cluster-access/kubectl/[Downloading `kubeconfig`].
+* https://rancher.com/docs/rancher/v2.6/en/admin-settings/rbac/[RBAC] +* https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/[Monitoring] +* https://rancher.com/docs/rancher/v2.6/en/logging/[Logging] \ No newline at end of file diff --git a/documentation/src/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc b/documentation/src/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc new file mode 100644 index 000000000..7e88adfdb --- /dev/null +++ b/documentation/src/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc @@ -0,0 +1,5 @@ +:provider: Gitlab +:pipeline_type: pipeline +:path_provider: gitlab +:trigger_sentence_gitlab: This pipeline will be configured to be executed inside a CI pipeline +include::../../common_templates/setup-eks-provisioning-pipeline.asciidoc[] \ No newline at end of file diff --git a/scripts/pipelines/common/pipeline_generator.lib b/scripts/pipelines/common/pipeline_generator.lib index 4e63d9498..7ebafe7ba 100644 --- a/scripts/pipelines/common/pipeline_generator.lib +++ b/scripts/pipelines/common/pipeline_generator.lib @@ -56,8 +56,8 @@ function help { echo "" echo "AWS EKS provisioning $pipeline_type flags:" echo " --cluster-name [Required] Name for the cluster." - echo " --s3-bucket [Required] Name of the S3 bucket where the Terraform state of the cluster will be stored." - echo " --s3-key-path [Required] Path within the S3 bucket where the Terraform state of the cluster will be stored." + [ "$provider" != "gitlab" ] && echo " --s3-bucket [Required] Name of the S3 bucket where the Terraform state of the cluster will be stored." + [ "$provider" != "gitlab" ] && echo " --s3-key-path [Required] Path within the S3 bucket where the Terraform state of the cluster will be stored." echo " --aws-access-key [Required, on first run] AWS account access key ID." echo " --aws-secret-access-key [Required, on first run] AWS account secret access key." echo " --aws-region [Required, on first run] AWS region for provisioning resources." 
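The `pipeline_generator.lib` hunk above replaces two unconditional `echo` lines with `[ "$provider" != "gitlab" ] && echo …` guards, so the S3 backend flags disappear from the help text when the GitLab generator (which uses GitLab-managed Terraform state) sources the lib. The pattern in isolation, with the flag text abbreviated:

```shell
provider="gitlab"

# Prints EKS help lines; S3 backend flags are only shown for providers
# that store Terraform state in S3 (i.e. anything but GitLab).
print_eks_flags() {
  echo "  --cluster-name  [Required] Name for the cluster."
  [ "$provider" != "gitlab" ] && echo "  --s3-bucket     [Required] Name of the S3 bucket."
  [ "$provider" != "gitlab" ] && echo "  --s3-key-path   [Required] Path within the S3 bucket."
  echo "  --aws-region    [Required, on first run] AWS region for provisioning resources."
}

gitlab_help=$(print_eks_flags)
provider="github"
github_help=$(print_eks_flags)
echo "$gitlab_help"
```

One caveat of the `[ … ] && echo` idiom: if such a guarded line is the last command of a function, a false guard makes the function return non-zero, which matters under `set -e`; here it is followed by an unconditional `echo`, so it is safe.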
diff --git a/scripts/pipelines/gitlab/pipeline_generator.sh b/scripts/pipelines/gitlab/pipeline_generator.sh new file mode 100644 index 000000000..d936862a1 --- /dev/null +++ b/scripts/pipelines/gitlab/pipeline_generator.sh @@ -0,0 +1,179 @@ +#!/bin/bash +set -e +FLAGS=$(getopt -a --options c:n:d:a:b:l:i:u:p:hw --long "config-file:,pipeline-name:,local-directory:,artifact-path:,target-branch:,language:,build-pipeline-name:,sonar-url:,sonar-token:,image-name:,registry-user:,registry-password:,resource-group:,storage-account:,storage-container:,cluster-name:,s3-bucket:,s3-key-path:,terraform-eks-state:,quality-pipeline-name:,dockerfile:,test-pipeline-name:,aws-access-key:,aws-secret-access-key:,aws-region:,help" -- "$@") + +eval set -- "$FLAGS" +while true; do + case "$1" in + -c | --config-file) configFile=$2; shift 2;; + -n | --pipeline-name) export pipelineName=$2; shift 2;; + -d | --local-directory) localDirectory=$2; shift 2;; + -a | --artifact-path) artifactPath=$2; shift 2;; + -b | --target-branch) targetBranch=$2; shift 2;; + -l | --language) language=$2; shift 2;; + --build-pipeline-name) export buildPipelineName=$2; shift 2;; + --sonar-url) sonarUrl=$2; shift 2;; + --sonar-token) sonarToken=$2; shift 2;; + -i | --image-name) imageName=$2; shift 2;; + -u | --registry-user) dockerUser=$2; shift 2;; + -p | --registry-password) dockerPassword=$2; shift 2;; + --resource-group) resourceGroupName=$2; shift 2;; + --storage-account) storageAccountName=$2; shift 2;; + --storage-container) storageContainerName=$2; shift 2;; + --cluster-name) clusterName=$2; shift 2;; + --terraform-eks-state) terraformEKSState=$2; shift 2;; + --quality-pipeline-name) export qualityPipelineName=$2; shift 2;; + --test-pipeline-name) export testPipelineName=$2; shift 2;; + --dockerfile) dockerFile=$2; shift 2;; + --aws-access-key) awsAccessKey="$2"; shift 2;; + --aws-secret-access-key) awsSecretAccessKey="$2"; shift 2;; + --aws-region) awsRegion="$2"; shift 2;; + -h | --help) help="true"; shift 1;; + -w)
webBrowser="true"; shift 1;; + --) shift; break;; + esac +done + +# Colours for the messages. +white='\e[1;37m' +green='\e[1;32m' +red='\e[0;31m' + +# Common var +commonTemplatesPath="scripts/pipelines/gitlab/templates/common" # Path for common files of the pipelines +pipelinePath=".pipelines" # Path to the pipelines. +scriptFilePath=".pipelines/scripts" # Path to the scripts. +gitlabCiFile=".gitlab-ci.yml" +export provider="gitlab" + +function obtainHangarPath { + + # This line goes to the script directory independent of wherever the user is and then jumps 3 directories back to get the path + hangarPath=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && cd ../../.. && pwd ) +} + +function addAdditionalArtifact { + # Check if an extra artifact to store is supplied. + if test ! -z "$artifactPath" + then + # Add the extra step to the YAML. + grep " artifacts:" "${localDirectory}/${pipelinePath}/${yamlFile}" > /dev/null && storeExtraPathContent=" - \"$artifactPath\"" + grep " artifacts:" "${localDirectory}/${pipelinePath}/${yamlFile}" > /dev/null || storeExtraPathContent="\n artifacts:\n paths:\n - \"$artifactPath\"" + sed -i "s/# mark to insert step for additonal artifact #/$storeExtraPathContent\n/" "${localDirectory}/${pipelinePath}/${yamlFile}" + else + echo "The '-a' flag has not been set, skipping the step to add additional artifact." + sed -i '/# mark to insert step for additonal artifact #/d' "${localDirectory}/${pipelinePath}/${yamlFile}" + fi +} + +# Function that adds the variables to be used in the pipeline. +function addCommonPipelineVariables { + if test -z "${artifactPath}" + then + echo "Skipping creation of the variable artifactPath as the flag has not been used." 
+ # Delete the commentary to set the artifactPath input/var + sed -i '/# mark to insert additional artifact env var #/d' "${localDirectory}/${pipelinePath}/${yamlFile}" + else + # add the input for the additional artifact + grep "variables:" "${localDirectory}/${pipelinePath}/${yamlFile}" > /dev/null && textArtifactPathVar=" artifactPath: ${artifactPath//\//\\/}" + grep "variables:" "${localDirectory}/${pipelinePath}/${yamlFile}" > /dev/null || textArtifactPathVar="variables:\n artifactPath: \"${artifactPath//\//\\/}\"" + sed -i "s/# mark to insert additional artifact env var #/$textArtifactPathVar/" "${localDirectory}/${pipelinePath}/${yamlFile}" + fi +} + +function addCiFile { + echo -e "${green}Copying and committing the GitLab CI file." + echo -ne ${white} + + cp "${hangarPath}/${commonTemplatesPath}/${gitlabCiFile}" "${localDirectory}/${gitlabCiFile}" + testCommit=$(git status) + if echo "$testCommit" | grep "nothing to commit, working tree clean" > /dev/null + then + echo "gitlab-ci file already present with same content, nothing to commit." + else + git add "${gitlabCiFile}" -f + git commit -m "adding gitlab-ci.yml" + git push + fi +} + +function createPR { + # Check if a target branch is supplied. + if test -z "$targetBranch" + then + # No branch specified in the parameters, no Pull Request is created, the code will be stored in the current branch. + echo -e "${green}No branch specified to do the Pull Request, changes left in the ${sourceBranch} branch." + exit + else + echo -e "${green}Creating a Pull Request..." + echo -ne "${white}" + repoURL=$(git config --get remote.origin.url) + repoNameWithGit="${repoURL/https:\/\/gitlab.com\/}" + repoName="${repoNameWithGit/.git}" + # Create the Pull Request to merge into the specified branch.
+ #debug + echo "glab mr create -b \"$targetBranch\" -d \"merge request $sourceBranch\" -s \"$sourceBranch\" -H \"${repoName}\" -t \"merge $sourceBranch\"" + pr=$(glab mr create -b "$targetBranch" -d "merge request $sourceBranch" -s "$sourceBranch" -H "${repoName}" -t "merge $sourceBranch") + + # trying to merge + if glab mr merge -s $(basename "$pr") -y + then + # Pull Request merged successfully. + echo -e "${green}Pull Request merged into $targetBranch branch successfully." + exit + # else + # # Check if the -w flag is activated. + # if [[ "$webBrowser" == "true" ]] + # then + # # -w flag is activated and a page with the corresponding Pull Request is opened in the web browser. + # echo -e "${green}Pull Request successfully created." + # echo -e "${green}Opening the Pull Request on the web browser..." + # python -m webbrowser "$pr" + # exit + # else + # # -w flag is not activated and the URL to the Pull Request is shown in the console. + # echo -e "${green}Pull Request successfully created." + # echo -e "${green}To review the Pull Request and accept it, click on the following link:" + # echo "${pr}" + # exit + # fi + fi + fi +} + + +obtainHangarPath + +# Load common functions +. 
"$hangarPath/scripts/pipelines/common/pipeline_generator.lib" + +if [[ "$help" == "true" ]]; then help; fi + +ensurePathFormat + +importConfigFile + +checkInstallations + +createNewBranch + +type addPipelineVariables &> /dev/null && addPipelineVariables + +copyYAMLFile + +addAdditionalArtifact + +copyCommonScript + +type copyScript &> /dev/null && copyScript + +# This function does not exist for the GitHub pipeline generator at this moment, but the 'type' guard is kept to mirror the structure of the other pipeline generators +type addCommonPipelineVariables &> /dev/null && addCommonPipelineVariables + +commitCommonFiles + +type commitFiles &> /dev/null && commitFiles + +addCiFile + +createPR diff --git a/scripts/pipelines/gitlab/templates/common/.gitlab-ci.yml b/scripts/pipelines/gitlab/templates/common/.gitlab-ci.yml new file mode 100644 index 000000000..34904840a --- /dev/null +++ b/scripts/pipelines/gitlab/templates/common/.gitlab-ci.yml @@ -0,0 +1,12 @@ +include: + - '.pipelines/*.yml' + +stages: + - build + - test + - quality + - package + +default: + image: maven:3-jdk-11 + tags: ['docker_ruby'] \ No newline at end of file diff --git a/scripts/pipelines/gitlab/templates/common/install-ingress-controller.sh b/scripts/pipelines/gitlab/templates/common/install-ingress-controller.sh new file mode 100644 index 000000000..0914cf122 --- /dev/null +++ b/scripts/pipelines/gitlab/templates/common/install-ingress-controller.sh @@ -0,0 +1,4 @@ +#!/bin/bash +helm repo add bitnami https://charts.bitnami.com/bitnami +helm repo update +helm install nginx-ingress bitnami/nginx-ingress-controller --set ingressClassResource.default=true --set containerSecurityContext.allowPrivilegeEscalation=false --namespace nginx-ingress --create-namespace \ No newline at end of file diff --git a/scripts/pipelines/gitlab/templates/eks/eks-pipeline.cfg b/scripts/pipelines/gitlab/templates/eks/eks-pipeline.cfg new file mode 100644 index 000000000..a53e3c69b --- /dev/null +++ 
b/scripts/pipelines/gitlab/templates/eks/eks-pipeline.cfg @@ -0,0 +1,67 @@ +# Mandatory flags. +mandatoryFlags="$configFile,$localDirectory,$clusterName,$terraformEKSState," +# Path to the templates. +templatesPath="scripts/pipelines/gitlab/templates/eks" +# YAML file name. +yamlFile="eks-provisioning.yml" +# Script name. +scriptFile="" +# Source branch. +sourceBranch="feature/eks-provisioning" +# Path to terraform templates. +terraformTemplatesPath="scripts/environment-provisioning/aws/eks" +# Path to terraform scripts. +terraformPath=".terraform/eks" +# Installs Rancher on EKS cluster if set to true +if test -z ${installRancher} +then + installRancher=false +else + installRancher=true +fi +# AWS Region where to provision resources. +region=eu-west-1 +# Default cluster operation. +operation="create" + +# Function that copies the Terraform files and auxiliary scripts into the project. +function copyScript { + # Create .terraform/eks folder if it does not exist. + mkdir -p "${localDirectory}/${terraformPath}" + + # Copy the terraform files. + cd "${hangarPath}/${terraformTemplatesPath}" + cp * "${localDirectory}/${terraformPath}" + + # Copy the ingress controller installation script into the directory. + cp "${hangarPath}/${commonTemplatesPath}/install-ingress-controller.sh" "${localDirectory}/${scriptFilePath}/install-ingress-controller.sh" + + # Copy the script to install rancher into the directory. + cp "${hangarPath}/${templatesPath}/install-rancher.sh" "${localDirectory}/${scriptFilePath}/install-rancher.sh" + + # Copy the script for the DNS name into the directory. + cp "${hangarPath}/${templatesPath}/obtain-dns.sh" "${localDirectory}/${scriptFilePath}/obtain-dns.sh" + +} + +function addPipelineVariables { + export installRancher + export region + export clusterName + export operation + export terraformEKSState + specificEnvSubstList='${clusterName} ${installRancher} ${region} ${operation} ${terraformEKSState}' +} + + +function commitFiles { + # Add the terraform files.
+ git add .terraform -f + + # Changing all files to be executable. + find .terraform -type f -name '*.sh' -exec git update-index --chmod=+x {} \; + + # Git commit and push it into the repository. + git commit -m "Adding the terraform files [skip ci]" + git push -u origin ${sourceBranch} +} \ No newline at end of file diff --git a/scripts/pipelines/gitlab/templates/eks/eks-provisioning.yml.template b/scripts/pipelines/gitlab/templates/eks/eks-provisioning.yml.template new file mode 100644 index 000000000..09bcc013a --- /dev/null +++ b/scripts/pipelines/gitlab/templates/eks/eks-provisioning.yml.template @@ -0,0 +1,180 @@ +default: + image: + name: ubuntu:latest + entrypoint: + - /usr/bin/env + - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + +workflow: + rules: + - if: '$CI_PIPELINE_SOURCE == "web"' + when: always + - when: never + +variables: + CLUSTER_NAME: + value: "$clusterName" + description: "Name for the cluster" + OPERATION: + value: "$operation" + description: "Operation to perform on cluster. Create or Destroy." + INSTALL_RANCHER: + value: "$installRancher" + description: "Installs rancher when set to true." 
+ TF_STATE_NAME: + value: "$terraformEKSState" + description: "Terraform EKS State Name" + TF_CACHE_KEY: default + TF_ROOT: "${CI_PROJECT_DIR}/.terraform/eks" + TF_USERNAME: ${GITLAB_USER_NAME} + TF_PASSWORD: ${GITLAB_TOKEN} + TF_ADDRESS: "https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/terraform/state/${TF_STATE_NAME}" + TF_HTTP_ADDRESS: ${TF_ADDRESS} + TF_HTTP_LOCK_ADDRESS: ${TF_ADDRESS}/lock + TF_HTTP_LOCK_METHOD: POST + TF_HTTP_UNLOCK_ADDRESS: ${TF_ADDRESS}/lock + TF_HTTP_UNLOCK_METHOD: DELETE + TF_HTTP_USERNAME: ${TF_USERNAME} + TF_HTTP_PASSWORD: ${TF_PASSWORD} + TF_HTTP_RETRY_WAIT_MIN: 5 + + +.install_prerequisites: &install_prerequisites + before_script: + - apt-get update + - apt-get install sudo -y + - apt-get install curl -y + - apt-get install zip -y + - apt-get install wget -y + - apt-get install git -y + +.download_terraform: &download_terraform + - wget -nv https://releases.hashicorp.com/terraform/1.2.6/terraform_1.2.6_linux_amd64.zip + +.install_terraform: &install_terraform + - unzip -qq terraform_1.2.6_linux_amd64.zip + - sudo mv terraform /usr/local/bin + +.download_awscli: &download_awscli + # INSTALL AWS CLI + - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" + +.install_awscli: &install_awscli + - unzip -qq awscliv2.zip + - sudo ./aws/install + +.update_kubeconfig: &update_kubeconfig + - aws eks update-kubeconfig --name ${CLUSTER_NAME} --region ${region} + + +.download_kubectl: &download_kubectl + - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" + - chmod +x ./kubectl + +.install_kubectl: &install_kubectl + - mv ./kubectl /usr/local/bin/kubectl + +.download_helm: &download_helm + - curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 + - chmod +x get_helm.sh + +.install_helm: &install_helm + - DESIRED_VERSION=v3.9.0 ./get_helm.sh + +.packages: &configure_packages + # INSTALL KUBECTL + - 
*download_kubectl + - *install_kubectl + # INSTALL HELM + - *download_helm + - *install_helm + +check_input: + script: | + if [ "$CLUSTER_NAME" == "" ]; then + echo "Cluster Name is required." + exit 1; + fi + if [ "$OPERATION" == "" ]; then + echo "Operation is required: create or destroy." + exit 1; + fi + +provision_eks: + needs: [check_input] + <<: *install_prerequisites + script: + - mkdir -p eks-pipeline-cache + - cd eks-pipeline-cache + - *download_terraform + - *install_terraform + - cd .. + - cd ${TF_ROOT} + - terraform init -var cluster_name=${CLUSTER_NAME} + - terraform apply -var cluster_name=${CLUSTER_NAME} --auto-approve + cache: + key: "eks-pipeline-cache" + paths: + - ./eks-pipeline-cache + rules: + - if: '$OPERATION == "create"' + when: always + +install_nginx: + <<: *install_prerequisites + needs: [provision_eks] + script: + - cd eks-pipeline-cache + - ls -lrt + - *download_awscli + - *install_awscli + - *update_kubeconfig + - *configure_packages + # INSTALL NGINX INGRESS CONTROLLER + - ${CI_PROJECT_DIR}/.pipelines/scripts/install-ingress-controller.sh + - ${CI_PROJECT_DIR}/.pipelines/scripts/obtain-dns.sh + cache: + key: "eks-pipeline-cache" + paths: + - ./eks-pipeline-cache + rules: + - if: '$OPERATION == "create"' + when: always + +install_rancher: + <<: *install_prerequisites + needs: [install_nginx] + cache: + key: "eks-pipeline-cache" + paths: + - ./eks-pipeline-cache + script: + - cd eks-pipeline-cache + - ls -lrta + - *install_awscli + - *update_kubeconfig + - *install_kubectl + - *install_helm + # INSTALL RANCHER + - ${CI_PROJECT_DIR}/.pipelines/scripts/install-rancher.sh + rules: + - if: '$OPERATION == "create"' + when: always + +destroy_terraform: + needs: [check_input] + <<: *install_prerequisites + script: + - *download_terraform + - *install_terraform + - cd ${TF_ROOT} + - *update_kubeconfig + - helm list --all-namespaces + - helm ls -a --all-namespaces | awk 'NR > 1 { print "-n "$2, $1}' | xargs -L1 helm delete + - echo 'LIST OF RELEASES AFTER HELM UNINSTALL..' 
+ - helm list --all-namespaces + - terraform init + - terraform apply -destroy --auto-approve + rules: + - if: '$OPERATION == "destroy"' + when: always diff --git a/scripts/pipelines/gitlab/templates/eks/install-rancher.sh b/scripts/pipelines/gitlab/templates/eks/install-rancher.sh new file mode 100644 index 000000000..ab7a48e8b --- /dev/null +++ b/scripts/pipelines/gitlab/templates/eks/install-rancher.sh @@ -0,0 +1,15 @@ +#!/bin/bash +helm repo add rancher-latest "https://releases.rancher.com/server-charts/latest" + +kubectl create namespace cattle-system + +kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.1/cert-manager.crds.yaml + +helm repo add jetstack https://charts.jetstack.io + +helm repo update + +# Install the cert-manager Helm chart +helm install cert-manager "jetstack/cert-manager" --namespace cert-manager --create-namespace --version v1.5.1 + +helm install rancher "rancher-latest/rancher" --namespace cattle-system --set hostname="$1" --set replicas=3 diff --git a/scripts/pipelines/gitlab/templates/eks/obtain-dns.sh b/scripts/pipelines/gitlab/templates/eks/obtain-dns.sh new file mode 100644 index 000000000..0f37eeee6 --- /dev/null +++ b/scripts/pipelines/gitlab/templates/eks/obtain-dns.sh @@ -0,0 +1,8 @@ +#!/bin/bash +dnsName=$(kubectl get svc --namespace nginx-ingress nginx-ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') +while test -z "$dnsName" +do + sleep 5s + dnsName=$(kubectl get svc --namespace nginx-ingress nginx-ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') +done +echo "dns=$dnsName" | tee -a eks-variables.env \ No newline at end of file From d9fb56970aa9160234ad78ff6421df37387497de Mon Sep 17 00:00:00 2001 From: isandesh1986 Date: Tue, 8 Nov 2022 06:01:49 +0000 Subject: [PATCH 2/2] Automatic generation of documentation --- .../setup-eks-provisioning-pipeline.asciidoc | 110 +++++++++++++++++- 1 file changed, 
105 insertions(+), 5 deletions(-) diff --git a/documentation/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc b/documentation/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc index 7e88adfdb..72c7a1581 100644 --- a/documentation/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc +++ b/documentation/gitlab/eks/setup-eks-provisioning-pipeline.asciidoc @@ -1,5 +1,105 @@ -:provider: Gitlab -:pipeline_type: pipeline -:path_provider: gitlab -:trigger_sentence_gitlab: This pipeline will be configured to be executed inside a CI pipeline -include::../../common_templates/setup-eks-provisioning-pipeline.asciidoc[] \ No newline at end of file +:provider: Gitlab +:pipeline_type: pipeline +:path_provider: gitlab +:trigger_sentence_gitlab: This pipeline will be configured to be executed inside a CI pipeline +:toc: macro +toc::[] +:idprefix: +:idseparator: - + += Setting up the AWS EKS provisioning {pipeline_type} on {provider} +In this section we will create a {pipeline_type} which will provision an AWS EKS cluster. This {pipeline_type} will be configured to be manually triggered by the user. As part of EKS cluster provisioning, an NGINX Ingress controller is deployed and a .env file with the name `eks-variables` is created in the .github folder, which contains, among others, the DNS name of the Ingress controller, which you will need to add as a CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix to retrieve the DNS name of the Ingress controller independently. + +The creation of the {pipeline_type} will follow the project workflow, so a new branch named `feature/eks-provisioning` will be created, and the YAML file for the workflow and the Terraform files for creating the cluster will be pushed to it. + +Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the `-b` flag). The PR will be automatically merged if the repository policies are met.
If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the `-w` flag. + +The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh` will automatically create this new branch, create the EKS provisioning {pipeline_type} based on the YAML template, create the Pull Request and, if possible, merge this new branch into the specified branch. + +=== Prerequisites + + + +* An S3 bucket. You can use an existing one or https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-buckets-creating[create a new one] with the following command: +``` +aws s3 mb +# Example: aws s3 mb s3://terraformStateBucket +``` + +* An AWS IAM user with https://github.com/devonfw/hangar/blob/master/documentation/aws/setup-aws-account-iam-for-eks.asciidoc#check-iam-user-permissions[required permissions] to provision the EKS cluster. + +* This script will commit and push the corresponding YAML template into your repository, so please be sure your local repository is up-to-date (i.e. you have pulled the latest changes with `git pull`). + +== Creating the {pipeline_type} using the provided script + +Before executing the workflow generator, you will need to customize some input variables about the environment. Also, you may want to use existing VPC and subnets instead of creating new ones. To do so, you can either edit the `terraform.tfvars` file or take advantage of the `set-terraform-variables.sh` script located at `/scripts/environment-provisioning/aws/eks`, which allows you to create or update values for the required variables, passing them as flags.
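`set-terraform-variables.sh` itself is not part of this patch; purely as an illustration, the "create or update values, passing them as flags" behaviour described above amounts to an upsert of `key = "value"` lines in `terraform.tfvars`. A minimal sketch (the `set_tfvar` function and the temp file are hypothetical, not the real script):

```shell
tfvars="$(mktemp)"

# Create or update a `key = "value"` assignment in the tfvars file.
set_tfvar() {
  key="$1"; value="$2"
  if grep -q "^${key}[[:space:]]*=" "$tfvars"; then
    # Key already present: rewrite its line in place.
    sed -i "s|^${key}[[:space:]]*=.*|${key} = \"${value}\"|" "$tfvars"
  else
    # Key absent: append a new assignment.
    echo "${key} = \"${value}\"" >> "$tfvars"
  fi
}

set_tfvar region eu-west-1
set_tfvar instance_type t3.medium
set_tfvar region eu-west-2   # updates the existing line instead of appending a duplicate
cat "$tfvars"
```

The upsert keeps the file valid Terraform input: re-running with a new value rewrites the existing assignment rather than leaving two conflicting `region` lines.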
+ +Example: creating a new VPC on cluster creation: + +``` +./set-terraform-variables.sh --region --instance_type --vpc_name --vpc_cidr_block +``` +Example: reusing existing VPC and subnets: +``` +./set-terraform-variables.sh --region --instance_type --existing_vpc_id --existing_vpc_private_subnets +``` +* Rancher is installed by default on the cluster after provisioning. If you wish to change this, please update `eks-pipeline.cfg` accordingly. + +=== Usage +``` +pipeline_generator.sh \ + -c \ + -d \ + --cluster-name \ + --terraform-eks-state \ + [-b ] \ + [-w] +``` + +NOTE: The config file for the EKS provisioning workflow is located at `/scripts/pipelines/{path_provider}/templates/eks/eks-pipeline.cfg`. + +=== Flags +``` +-c, --config-file [Required] Configuration file containing workflow definition. +-d, --local-directory [Required] Local directory of your project (the path should always be using '/' and not '\'). + --cluster-name [Required] Name for the cluster. + --terraform-eks-state [Required] Name of the GitLab-managed Terraform state file of the cluster. +-b, --target-branch Name of the branch the Pull Request will target. PR is not created if the flag is not provided. +-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag.
+``` + +=== Example + +``` +./pipeline_generator.sh -c ./templates/eks/eks-pipeline.cfg -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name hangar-eks-cluster --aws-region eu-west-1 +``` + +== Appendix: Interacting with the cluster + +First, generate a `kubeconfig` file for accessing the AWS EKS cluster: + +``` +aws eks update-kubeconfig --name --region +``` +Now you can use the `kubectl` tool to communicate with the cluster. + +To enable an IAM user to connect to the EKS cluster, please refer to https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html[this guide]. + +To get the DNS name of the NGINX Ingress controller on the EKS cluster, run the following command: +``` +kubectl get svc --namespace nginx-ingress nginx-ingress-nginx-ingress-controller -o jsonpath={.status.loadBalancer.ingress[0].hostname} +``` + +Rancher will be available on `https:///dashboard`. + +== Appendix: Rancher resources + +* https://rancher.com/docs/rancher/v2.6/en/cluster-admin/cluster-access/kubectl/[Downloading `kubeconfig`]. +* https://rancher.com/docs/rancher/v2.6/en/admin-settings/rbac/[RBAC] +* https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/[Monitoring] +* https://rancher.com/docs/rancher/v2.6/en/logging/[Logging]
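A closing note on the destroy path: the `destroy_terraform` job in `eks-provisioning.yml.template` clears all Helm releases before `terraform destroy` via `helm ls -a --all-namespaces | awk 'NR > 1 { print "-n "$2, $1}' | xargs -L1 helm delete`. The awk stage can be exercised against canned `helm ls` output (the release rows below are made up for the demo):

```shell
# Fake `helm ls -a --all-namespaces` output: a header row, then NAME and NAMESPACE columns.
helm_ls_output='NAME            NAMESPACE       REVISION  STATUS
nginx-ingress   nginx-ingress   1         deployed
rancher         cattle-system   1         deployed'

# Same awk program as the destroy job: skip the header (NR > 1) and emit
# "-n <namespace> <release>" so `xargs -L1 helm delete` runs one delete per line.
delete_args=$(printf '%s\n' "$helm_ls_output" | awk 'NR > 1 { print "-n "$2, $1 }')
echo "$delete_args"
```

With this input the pipeline stage would end up invoking `helm delete -n nginx-ingress nginx-ingress` and `helm delete -n cattle-system rancher`, emptying the cluster so Terraform can tear down the AWS resources cleanly.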