CloudPak Community Automation

Introduction

This repo is the home of the Community Automation effort, where teams contribute automation to be shared with other teams. As a guild, we decided to use a combination of Jenkins and Ansible for our implementations. Details on both are below.

How to run playbooks

  • Clone this community repo.
  • Decide on your run option: Docker, or a personal run client (e.g. a VM).

NOTE: both options ensure the proper version of Ansible is used. The Ansible version should be 2.9 or higher.
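If you go the personal-client route, a quick sanity check confirms the installed Ansible meets the 2.9 minimum (a sketch; the Docker image already satisfies it):

```shell
# Check that the locally installed Ansible is at least 2.9.
required="2.9"
current=$(ansible --version 2>/dev/null | head -n1 | grep -oE '[0-9]+\.[0-9]+' | head -n1)
if [ -z "$current" ]; then
  echo "ansible not found; run scripts/common/install-prereqs.sh first"
elif [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "ansible $current is new enough"
else
  echo "ansible $current is too old; need >= $required"
fi
```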

Docker Option

A public Docker image has been created with all prerequisites for running the playbooks.

Make sure docker (or podman on RHEL 8) is installed and running on your workstation.

NOTES:

  • You may need to do a docker logout before you begin.
  • By default, SSH keys are taken from your ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub.
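If you do not yet have a default keypair, generate one before starting the container. A sketch (the paths are the defaults the image mounts):

```shell
# Generate a default RSA keypair if one does not already exist;
# the container bind-mounts ~/.ssh and expects id_rsa/id_rsa.pub.
if [ ! -f "$HOME/.ssh/id_rsa" ]; then
  mkdir -p "$HOME/.ssh"
  ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa"
fi
```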

Tested on macOS (Big Sur), Ubuntu 16.04/18.04/20.04, and RHEL 7/8.

macOS, Ubuntu, and RHEL 7 users

Creates a bash command line in the container, ready to run playbooks:

# docker run -v /<YOUR_REPO_ABSOLUTE_PATH>:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest

EXAMPLE:
# docker run -v /Users/ashworth/projects/community-automation:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest

Calling a playbook (assumes all vars are set in vars files); exits the container when complete:

# docker run -v /<YOUR_REPO_ABSOLUTE_PATH>:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest -c "ansible-playbook -i <YOUR_PLAYBOOK_FOLDER>/inventory  <YOUR_PLAYBOOK_FOLDER>/playbook.yml"

EXAMPLE:
# docker run -v /Users/ashworth/projects/community-automation:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest -c "ansible-playbook -i csi-cephfs-fyre-play/inventory  csi-cephfs-fyre-play/csi-cephfs.yml"

Using parameter passing, with a mix of vars files and command-line vars; exits the container when complete:

# docker run -v /<YOUR_REPO_ABSOLUTE_PATH>:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest -c "ansible-playbook -i <YOUR_PLAYBOOK_FOLDER>/inventory  <YOUR_PLAYBOOK_FOLDER>/playbook.yml -e \"var1=value1\" -e \"var2=value1\""

EXAMPLE:
# docker run -v /Users/ashworth/projects/community-automation:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest -c "ansible-playbook -i csi-cephfs-fyre-play/inventory csi-cephfs-fyre-play/csi-cephfs.yml -e \"var1=value1\""
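These invocations get long; a small wrapper function keeps them manageable. This helper is hypothetical (not part of the repo), and REPO_DIR is an assumption to point at your own clone:

```shell
# Hypothetical helper: run any play's playbook inside the
# community-ansible container. Not part of the repo.
REPO_DIR="$HOME/projects/community-automation"

run_play() {
  # $1 = play folder, $2 = playbook file; remaining args go to ansible-playbook
  play_dir="$1"; playbook="$2"; shift 2
  docker run -v "$REPO_DIR":/community-automation -v ~/.ssh:/root/.ssh -i -t \
    quay.io/rayashworth/community-ansible:latest \
    -c "ansible-playbook -i $play_dir/inventory $play_dir/$playbook $*"
}
```

Example (hypothetical invocation): `run_play csi-cephfs-fyre-play csi-cephfs.yml -e rook_cephfs_release=v1.4.7`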

RHEL 8

Creates a bash command line in the container, ready to run playbooks:

# podman run -v /<YOUR_REPO_ABSOLUTE_PATH>:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest

EXAMPLE:
# podman run -v /Users/ashworth/projects/community-automation:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest

Calling a playbook (assumes all vars are set in vars files); exits the container when complete:

# podman run -v /<YOUR_REPO_ABSOLUTE_PATH>:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest -c "ansible-playbook -i <YOUR_PLAYBOOK_FOLDER>/inventory  <YOUR_PLAYBOOK_FOLDER>/playbook.yml"

EXAMPLE:
# podman run -v /Users/ashworth/projects/community-automation:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest -c "ansible-playbook -i csi-cephfs-fyre-play/inventory  csi-cephfs-fyre-play/csi-cephfs.yml" 

Using parameter passing, with a mix of vars files and command-line vars; exits the container when complete:

# podman run -v /<YOUR_REPO_ABSOLUTE_PATH>:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest -c "ansible-playbook -i <YOUR_PLAYBOOK_FOLDER>/inventory <YOUR_PLAYBOOK_FOLDER>/playbook.yml -e \"var1=value1\" -e \"var2=value2\""

EXAMPLE:
# podman run -v /Users/ashworth/projects/community-automation:/community-automation -v ~/.ssh:/root/.ssh -i -t quay.io/rayashworth/community-ansible:latest -c "ansible-playbook -i csi-cephfs-fyre-play/inventory csi-cephfs-fyre-play/csi-cephfs.yml -e \"rook_cephfs_release=v1.4.7\""

Personal install client (VM)

NOTE: This only needs to be run once.

From the repo home folder "community-automation", run the following command, which will set up your personal install client with all of the necessary prerequisites to run playbooks. README

# cd community-automation
# scripts/common/install-prereqs.sh

# RHEL 8
# cd community-automation
# scripts/common/install-prereqs.sh -u YOUR_RH_USERNAME -p YOUR_RH_PASSWORD

Play list

| play | Description | Status | Comments |
| --- | --- | --- | --- |
| prereq-play | Install all prerequisites for using this repo | Available | none |
| common-services-cat-src-inst-play | Install Common Services Operator Catalog Source | Available | none |
| common-service-fyre-play | Install csi-cephfs and Common Services on Fyre | Available | none |
| common-service-play | Deploy Common Services on any infrastructure | Available | none |
| csi-cephfs-fyre-play | Deploy cephfs storage on your Fyre cluster | Available | none |
| deploy-ova-vmware-play | Deploy a new RHCOS template to VMware | Available | none |
| request-ocp-fyre-play | Deploy an OCP cluster on old Fyre and Fyre OCP+ | Available | none |
| request-ocp-ceph-fyre-play | Deploy a Fyre OCP+ cluster with cephfs | Available | none |
| request-ocp-cs-install-fyre-play | Deploy a Fyre OCP+ cluster and install csi-cephfs and common-services | Available | none |
| request-crc-fyre-play | Install a Red Hat CodeReady Containers instance | Available | none |
| request-ocp-aws-play | Deploy an OCP cluster on AWS | WIP | none |
| request-ocp-roks-play | Deploy an OCP cluster on ROKS | Available | none |
| request-ocp4-logging-fyre-play | Install OCP logging onto OCP+ Fyre clusters | Available | none |
| request-ocp4-logging-play | Install OCP logging onto OCP 4.x clusters | Available | none |
| request-ocpplus-cluster-transfer-fyre-play | Transfer an OCP+ cluster | Available | none |
| request-ocs-fyre-play | Install OpenShift Container Storage (OCS) on OCP+ Fyre clusters | Available | none |
| request-ocs-play | Install OpenShift Container Storage on AWS or VMware | Available | none |
| request-ocs-local-storage-vmware | Install OpenShift Container Storage (OCS) on VMware OCP clusters | Available | none |
| recover-machine-config-play | Recover a machine-config that is not rolling out | Available | none |
| common-service-cat-src-inst-play | Install the Common Services Catalog Source | Available | none |
| request-rhel-jmeter-fyre-play | Install JMeter on Fyre RHEL 8 | Available | none |
| aws-route53-play | Create DNS entries for VMware and AWS IPI installs | Available | none |
| provision-ocp-cluster-play | Deploy OCP clusters via a Hive instance on an OCP cluster | Available | AWS only at this time |

Supporting Roles

| role | Description | Status | Comments |
| --- | --- | --- | --- |
| ocp-login | Used when an OCP login is needed for your play | Available | Automatically installs the oc client |
| oc-client-install | Installs the oc command | Available | Automatic when using the ocp-login role |
| ocp-cluster-tag | Tags your cluster | Available | Working on AWS only at this time |
| aws-route53 | Sets up api.* and apps.* for the vSphere IPI installer | Available | none |
| deploy-ova-vmware | Deploys the Red Hat CoreOS image to VMware | Available | none |
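A play pulls these shared roles in through the standard `roles:` keyword. A minimal sketch that writes such a playbook using the ocp-login role (the file name is made up; see each role's readme for its variables):

```shell
# Sketch: a minimal playbook that reuses the shared ocp-login role.
# The file name is hypothetical; real plays live under <name>-play/.
cat > my-example-play.yml <<'EOF'
- hosts: localhost
  roles:
    - ocp-login
EOF
# Run it the usual way (requires the roles symlink in the play folder):
# ansible-playbook my-example-play.yml
```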

Scripts

Scripts can be found in the ansible/scripts folder.

| script | Description | Sub folder | Status | Comments |
| --- | --- | --- | --- | --- |
| install-prereq.sh | Install all prerequisites on the install client VM | common/ | Available | none |

Jenkins

Create your Jenkins file

The "Jenkinsfile" should live in your top play folder.
Here is a sample (Jenkinsfile-example). You will update the paramsList and the stage() sections; the rest is static for now.

#! groovy

// Standard job properties
def jobProps = [
  buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '50')),
  disableResume(),
  durabilityHint("PERFORMANCE_OPTIMIZED"),
  [$class: 'RebuildSettings', autoRebuild: false, rebuildDisabled: false]
]

def paramsList = [
    string(name: 'API_Server_URL', defaultValue: "api.", description: 'cluster api server url'),
    string(name: 'API_Server_Port', defaultValue: "6443", description: 'cluster api server port number'),
    string(name: 'cluster_admin', defaultValue: "kubeadmin", description: 'cluster admin user'),
    password(name: 'cluster_admin_password', description: 'cluster admin password'),
    string(name: 'ocp_client_version', defaultValue: "4.2.0"),
    string(name: 'machine_config', description: 'machine config found from oc get mc, newest version')
  ]
jobProps.push(parameters(paramsList))

properties(jobProps)

timestamps {
  ansiColor('xterm') {
    node ( 'kube_pod_slave' ) {

      checkout scm

      stage('Recover Machine Config') {
        sh """
          set +x # hide sensitive info being echo'd to log
          cp ./ansible/recover-machine-config-play/examples/mc_vars.yml ./ansible/recover-machine-config-play/;\
          cp ./ansible/recover-machine-config-play/examples/inventory ./ansible/recover-machine-config-play/;\
          ansible-playbook -i ./ansible/recover-machine-config-play/inventory \
          ./ansible/recover-machine-config-play/recover-machine-config-play.yml \
          -e kubeadmin_user=${params.cluster_admin} \
          -e kubeadmin_password=${params.cluster_admin_password} \
          -e ocp_api_url=${API_Server_URL}:${API_Server_Port} \
          -e arch="linux" \
          -e ocp_client_version=${ocp_client_version} \
          -e machine_config=${machine_config} -vv
        """.stripIndent()
      }
   }
  }
}

Setting up your Jenkins job

Community Jenkins Server

  • Log in to the Jenkins server.
  • Navigate to "Cluster Ops".
  • Select "New Item" from the left nav.
  • Name your job.
  • Select "Multibranch Pipeline" from the list.
  • Enter "community-pipeline-template" in the "Copy from" field.
  • Select "OK".

Edit job settings

  • Select "Single repository & branch" from the "Add Source" dropdown.
  • Enter your branch name in the "Name" field; it should be "master" most of the time.
  • Update the credentials, using your GitHub username and token. (Working to get a FUNCTIONAL_ID.)
  • Under "Build Configuration", update "Script Path".
  • Select "Save".

Ansible

Ansible Documentation

Folder Structure

The folder structure was taken from the Ansible best-practices document.

The following is a snippet to help understand the folder structure and how we are using it.
We have plays and roles, each at the top level under ansible/.
Play folders end in -play, while the matching role folder contains the play name without "-play".

There are also roles without a corresponding play folder; these roles provide common functions that can be shared across plays.

To ensure the roles load correctly, you will notice a symbolic link to the top-level roles directory in each play folder. This is a workaround until Ansible collections become available.
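When adding a new play folder, create the same link yourself. A sketch (the play name is hypothetical):

```shell
# Create a new play folder with the roles symlink Ansible needs to
# resolve the shared roles/ directory (the workaround described above).
mkdir -p ansible/my-new-play
ln -sfn ../roles ansible/my-new-play/roles
```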

└── ansible
    ├── LICENCE.txt
    ├── README.md
    ├── ansible.cfg
    ├── common-service-play
    │   ├── Jenkinsfile
    │   ├── README.md
    │   ├── common-services.yml
    │   ├── examples
    │   │   ├── cs_vars.yml
    │   │   └── inventory
    │   └── roles -> ../roles
    ├── recover-expired-certificates-play
    │   ├── Jenkinsfile
    │   ├── readme.md
    │   ├── recover-expired-certificates-play.yml
    │   └── roles -> ../roles
    ├── recover-machine-config-play
    │   ├── Jenkinsfile
    │   ├── readme.md
    │   ├── recover-machine-config-play.yml
    │   └── roles -> ../roles
    ├── request-ocp-aws-play
    │   └── roles -> ../roles
    ├── request-ocp-fyre-play
    │   └── roles -> ../roles
    ├── request-crc-fyre-play
    │   └── roles -> ../roles
    ├── request-ocp-roks-play
    │   └── roles -> ../roles
    └── roles
        ├── common-services
        │   ├── README.md
        │   ├── defaults
        │   │   └── main.yml
        │   ├── tasks
        │   │   └── main.yml
        │   └── templates
        │       ├── cs-group.yaml.j2
        │       ├── cs-request.yaml.j2
        │       ├── cs-sub.yaml.j2
        │       ├── cs-validation.bash.j2
        │       └── opencloud-source.yaml.j2
        ├── ocp-login
        │   └── tasks
        │       ├── main.yml
        │       └── ocp-login.yml
        ├── recover-epxired-certificates
        │   ├── defaults
        │   │   └── main.yml
        │   ├── files
        │   ├── meta
        │   │   └── main.yml
        │   ├── tasks
        │   │   ├── main.yml
        │   │   └── recover-expired-certificates.yml
        │   ├── templates
        │   └── vars
        ├── recover-machine-config
        │   ├── defaults
        │   │   └── main.yml
        │   ├── files
        │   ├── meta
        │   │   └── main.yml
        │   ├── readme.md
        │   ├── tasks
        │   │   ├── main.yml
        │   │   └── recover-machine-config.yml
        │   ├── templates
        │   └── vars
        ├── request-ocp-aws
        │   ├── default
        │   ├── readme.md
        │   ├── tasks
        │   └── templates
        ├── request-ocp-fyre
        │   ├── defaults
        │   ├── readme.md
        │   ├── tasks
        │   └── templates
        ├── jmeter
        ├── jmeter_fyrevm
        ├── jmeter_java
        ├── java
        ├── jmeter_prereqs
        └── request-ocp-roks
            ├── defaults
            ├── readme.md
            ├── tasks
            └── templates

Common Repositories

Terraform automation (VMware, AWS, Google, and Azure): https://github.ibm.com/ICP-DevOps/tf_openshift_4

Some useful tools (cluster recovery scripts): https://github.ibm.com/ICP-DevOps/tf_openshift_4_tools

ROKS Automation

ROKS info is being provided until an Ansible solution can be worked out in this community repo.

DTE's ROKS provisioning tooling: https://github.ibm.com/dte/roksprovisioning. See the README.md; it includes an installer script to get started.

The following are used in the above repo, for reference:

IBM Cloud Terraform Docker Image: https://hub.docker.com/r/ibmterraform/terraform-provider-ibm-docker/

IBM Cloud Terraform Documentation: https://cloud.ibm.com/docs/terraform?topic=terraform-getting-started

About

community-automation is meant to be a place where developers can contribute to a library of Ansible playbooks related to Red Hat OpenShift Container Platform, from installing OCP to post-install activities. The intent is that these playbooks and roles can be reused by any team looking for automation for their daily work, or added to their CI/CD pipelines.
