Welcome to this CKA (Certified Kubernetes Administrator) lab setup guide! This project provides an automated way to create a Kubernetes cluster using Google Cloud and Terraform.
Before you begin, ensure you have the following installed and configured:
- Google Cloud SDK
- Terraform
- A Google Cloud Platform account with appropriate permissions to create resources
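Before moving on, you can quickly confirm the tooling is installed and authenticated; the project ID below is a placeholder:

```sh
# Check that the CLIs are available
gcloud version
terraform version

# Authenticate and point gcloud at the project that will host the lab
gcloud auth login
gcloud config set project your-gcp-project-id
```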
Ensure the Cloud Resource Manager API and the Compute Engine API are enabled in your Google Cloud project. To enable them from the Google Cloud Console:

- Go to the Google Cloud Console.
- In the left sidebar, navigate to APIs & Services > Library.
- In the search bar, type "Cloud Resource Manager API" and select it from the results.
- Click the Enable button.
- Repeat the same steps for the "Compute Engine API".
Alternatively, enable both APIs with the gcloud CLI:

```sh
# Cloud Resource Manager API
gcloud services enable cloudresourcemanager.googleapis.com

# Compute Engine API
gcloud services enable compute.googleapis.com
```
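To confirm both APIs are active before continuing, you can filter the list of enabled services (a plain grep is enough here):

```sh
# Both APIs should appear in the enabled-services list
gcloud services list --enabled | grep -E 'cloudresourcemanager|compute'
```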
Next, create a service account for Terraform to use:

- Go to the Google Cloud Console.
- Navigate to IAM & Admin > Service Accounts.
- Click on Create Service Account.
- Provide a name and description for the service account.
- Click on Create and Continue.
- Assign the following roles to the service account:
- Compute Admin
- Editor
- Secret Manager Admin
- Click on Done.
- After creating the service account, go to the service account details and create a new key. Download the key file in JSON format and save it to a secure location.
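If you prefer to script this step, the gcloud equivalent looks roughly like the sketch below; the account name terraform-cka, the key path, and your-gcp-project-id are placeholders, not names used by this repository:

```sh
# Create the service account (name is a placeholder)
gcloud iam service-accounts create terraform-cka \
  --display-name="CKA lab Terraform"

# Grant the three roles listed above
for role in roles/compute.admin roles/editor roles/secretmanager.admin; do
  gcloud projects add-iam-policy-binding your-gcp-project-id \
    --member="serviceAccount:terraform-cka@your-gcp-project-id.iam.gserviceaccount.com" \
    --role="$role"
done

# Download a JSON key for Terraform to reference via credentials_file
gcloud iam service-accounts keys create ~/terraform-cka-key.json \
  --iam-account="terraform-cka@your-gcp-project-id.iam.gserviceaccount.com"
```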
Clone the repository:

```sh
git clone https://github.com/glyphx/CKA.git
cd CKA
```

Update the variables.tf file in the root of the project and add your GCP credentials and desired configuration:
```hcl
project               = "your-gcp-project-id"
service_account_email = "your-service-account-email"
credentials_file      = "path-to-your-gcp-credentials-file.json"
region                = "your-region"
zone                  = "your-zone"
username              = "your-username"
machine_type          = "e2-small"
image                 = "ubuntu-os-cloud/ubuntu-2004-lts"
pod_network_cidr      = "10.244.0.0/16"
control_plane_count   = 1
worker_count          = 2
```
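Inside variables.tf, each of these settings normally lives in a variable block whose default you edit; the sketch below shows the general shape for a few of them and may not match the repository's exact declarations:

```hcl
# Sketch only: the repository's variables.tf may declare these differently.
variable "project" {
  description = "GCP project ID that will host the lab cluster"
  type        = string
  default     = "your-gcp-project-id"
}

variable "credentials_file" {
  description = "Path to the service account key JSON downloaded earlier"
  type        = string
  default     = "path-to-your-gcp-credentials-file.json"
}

variable "worker_count" {
  description = "Number of worker nodes to create"
  type        = number
  default     = 2
}
```

Alternatively, the same values can be supplied through a terraform.tfvars file or -var flags, which is standard Terraform behavior and leaves variables.tf untouched.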
Initialize Terraform:

```sh
terraform init
```

Review the Terraform plan:

```sh
terraform plan
```

Apply the Terraform configuration:

```sh
terraform apply
```

Confirm the action by typing `yes` when prompted.
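If you want to apply exactly the plan you reviewed, you can optionally save it first; this is standard Terraform usage rather than anything specific to this project:

```sh
# Save the plan, inspect it, then apply that exact plan (no second prompt)
terraform plan -out=tfplan
terraform show tfplan
terraform apply tfplan
```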
After the Terraform configuration completes, your Kubernetes cluster will be ready. The master node will have kubectl configured to manage the cluster. You can SSH into the master node to start managing your cluster:
```sh
gcloud secrets versions access latest --secret="kubernetes-key" > ~/.ssh/kubernetes_key &&
chmod 600 ~/.ssh/kubernetes_key &&
ssh-keygen -y -f ~/.ssh/kubernetes_key > ~/.ssh/kubernetes_key.pub &&
gcloud compute ssh --zone "us-west1-a" "k8s-master" --ssh-key-file=~/.ssh/kubernetes_key
```

Once logged into the master node, you can use kubectl to manage your cluster:

```sh
kubectl get nodes
```
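A few additional standard kubectl checks can confirm the cluster is healthy; with the defaults above you should see one control plane node and two workers, though the exact node names depend on the Terraform configuration:

```sh
# All nodes should eventually report Ready
kubectl get nodes -o wide

# Control plane and CNI pods should be Running
kubectl get pods -n kube-system

# Shows the API server endpoint kubectl is talking to
kubectl cluster-info
```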
To destroy the resources created by Terraform, run:

```sh
terraform destroy
```

If you encounter any issues, please check the logs on the master and worker nodes, located in /var/log/install.log and /var/log/startup-script.log, for detailed error messages.
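If a node never joins the cluster, you can read those logs without an interactive session; the worker instance name below is an assumption based on the naming used for k8s-master, so adjust it (and the zone) to match your deployment:

```sh
# Tail the setup logs on a worker over SSH (instance name and zone are assumptions)
gcloud compute ssh --zone "us-west1-a" "k8s-worker-1" \
  --ssh-key-file=~/.ssh/kubernetes_key \
  --command "sudo tail -n 50 /var/log/install.log /var/log/startup-script.log"
```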
Contributions are welcome! Please submit a pull request with your changes.
This project is licensed under the MIT License. See the LICENSE file for details.
For any questions or feedback, please open an issue on GitHub or reach out via email.
For more details and the latest updates, visit the CKA GitHub repository.

