Commit bd825e4

updated workflow and added solution desc in readme of challeneg9
1 parent 2f958d2 commit bd825e4

File tree

2 files changed: +137 -0 lines changed

.github/workflows/workflow.yaml

Lines changed: 3 additions & 0 deletions

```diff
@@ -3,6 +3,9 @@ run-name: ${{ github.actor }} is Building and pushing an image
 on:
   push:
     branches: ['main']
+    paths-ignore:
+      - 'Readme.md'
+      - '**/readme.md'
 jobs:
   build:
     runs-on: ubuntu-latest
```

Note: the filter key must be `paths-ignore` (plural); `path-ignore` is silently ignored by GitHub Actions.

challenge9/readme.md

Lines changed: 134 additions & 0 deletions

@@ -37,7 +37,141 @@ Good luck! This connects the final dots in your DevOps pipeline.

## Solution

### Deployment Update Strategy
The rolling update strategy is the ideal way to roll out the deployment with the new image. We can achieve it using either an imperative command or imperative object configuration.

1. With an imperative command, we update the image of our deployment directly from the command line:

   `kubectl set image deployments/my-app-deployment container=rajrishab/challeneg9:ascbasd`

2. With imperative object configuration, we update the deployment configuration file and then apply it using the following command:

   `kubectl apply -f deployment.yaml`

In both approaches Kubernetes updates the pods one by one: each new pod is scheduled on a node with available resources, and only after the new pod is created and running does Kubernetes remove the corresponding old pod from the cluster.
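The pace of this one-by-one replacement can be tuned in the Deployment spec. A minimal sketch, assuming a 3-replica deployment; the field values here are illustrative and not taken from this commit:

```yaml
# Illustrative Deployment excerpt: an explicit RollingUpdate strategy.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # the default strategy type for Deployments
    rollingUpdate:
      maxSurge: 1              # at most one extra pod during the rollout
      maxUnavailable: 0        # never drop below the desired replica count
```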

### GitHub Actions CD Script

We can use the following CD job to update our deployment:

```yaml
deploy:
  runs-on: kub-runner
  needs: build
  steps:
    - name: checkout
      uses: actions/checkout@v4
    - name: Download Kubectl binaries
      run: curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    - name: Install Kubectl
      run: sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    - name: updating config
      run: |
        IMAGE_TAG="${{ needs.build.outputs.tag }}"
        sed -i "s|image:.*|image: ${IMAGE_TAG}|" ./challenge9/kubernetes/deployment.yaml
    - name: Deploy the app to kubernetes
      run: |
        kubectl config set-cluster minikube --server=https://192.168.49.2:8443 --insecure-skip-tls-verify=true
        kubectl config set-credentials my-remote-access-user --token="${{ secrets.TOKEN }}"
        kubectl config set-context my-remote-access-context --cluster=minikube --user=my-remote-access-user --namespace=default
        kubectl config use-context my-remote-access-context
        kubectl get pods --all-namespaces
        kubectl config view
        echo "the value of TAG is ${{ needs.build.outputs.tag }}"
        kubectl apply -f ./challenge9/kubernetes/deployment.yaml
```
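One step a workflow author might append after the `kubectl apply` (an illustrative addition, not part of this commit) is a rollout check, so the job fails visibly if the new image cannot start:

```yaml
# Illustrative extra step for the deploy job.
- name: Verify rollout
  run: |
    # Block until the Deployment finishes rolling out, or fail after 2 minutes.
    kubectl rollout status deployment/my-app-deployment --timeout=120s
```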
### Tag Retrieval

We can retrieve the image tag from the previous build step using the following command:

`IMAGE_TAG="${{ needs.build.outputs.tag }}"`
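For `needs.build.outputs.tag` to resolve, the `build` job must declare that output. A minimal sketch; the step id and the tag format are illustrative assumptions, not taken from this commit:

```yaml
# Illustrative: a build job exposing the image tag to downstream jobs.
build:
  runs-on: ubuntu-latest
  outputs:
    tag: ${{ steps.meta.outputs.tag }}   # consumed downstream as needs.build.outputs.tag
  steps:
    - id: meta
      # Illustrative: derive an image tag from the short commit SHA
      # and publish it as a step output.
      run: echo "tag=rajrishab/challeneg9:${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"
```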
We can then update the deployment config file using the following command:

`sed -i "s|image:.*|image: ${IMAGE_TAG}|" ./challenge9/kubernetes/deployment.yaml`
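To see the `sed` substitution behave as intended, here is a self-contained dry run against a throwaway manifest (the file path and both tags are made up for illustration):

```shell
# Create a throwaway manifest with an image line to rewrite.
cat > /tmp/deployment-demo.yaml <<'EOF'
spec:
  template:
    spec:
      containers:
        - name: app
          image: rajrishab/challeneg9:old-tag
EOF

# Stand-in for ${{ needs.build.outputs.tag }} in the real workflow.
IMAGE_TAG="rajrishab/challeneg9:new-tag"

# Replace everything after "image:" with the freshly built tag, in place.
# The leading whitespace is not part of the match, so indentation survives.
sed -i "s|image:.*|image: ${IMAGE_TAG}|" /tmp/deployment-demo.yaml

grep 'image:' /tmp/deployment-demo.yaml
```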
### Preparing the Cluster

We can prepare our cluster for the update with the following steps:
1. Install cert-manager to manage TLS certificates; the actions-runner-controller installed in the next step relies on it for its webhook certificates:

   ```bash
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   helm install cert-manager jetstack/cert-manager \
     --namespace cert-manager \
     --create-namespace \
     --version v1.15.1 \
     --set crds.enabled=true
   ```
2. Generate a PAT with repo permissions so that our pod can register as a remote runner in GitHub, then install the actions-runner-controller with it:

   ```bash
   helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
   helm repo update
   helm upgrade --install --namespace actions-runner-system --create-namespace \
     --set=authSecret.create=true \
     --set=authSecret.github_token="REPLACE_YOUR_PAT_HERE" \
     --wait actions-runner-controller actions-runner-controller/actions-runner-controller
   ```
3. Then create a RunnerDeployment, which acts as the remote runner that executes our CD pipeline:

   ```yaml
   apiVersion: actions.summerwind.dev/v1alpha1
   kind: RunnerDeployment
   metadata:
     name: kubernetes-runner
   spec:
     replicas: 1
     template:
       spec:
         serviceAccountName: runner-sa # This ServiceAccount needs permissions
         repository: your_github_username/your-repository-name # Update to your target repo (e.g., siddhant-khisty/key-store-gin)
         labels:
           - "kubernetes-runner" # Label to target this runner in your workflow
   ```
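A job lands on this runner when its `runs-on` names one of the labels above; a minimal fragment, assuming the `kubernetes-runner` label:

```yaml
# Illustrative: targeting the self-hosted runner by its RunnerDeployment label.
deploy:
  runs-on: kubernetes-runner   # must match an entry in the RunnerDeployment's labels
```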
4. Then create a service account, which the remote runner will use to authenticate and authorize its requests to the cluster:

   `kubectl create sa runner-sa -n actions-runner-system`
5. Then create a ClusterRole and ClusterRoleBinding to grant the necessary permissions to the service account:

   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: runner-deployments
   rules:
     - apiGroups: ["apps", "", "rbac.authorization.k8s.io"]
       resources: ["*"]
       verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: runner-deployments-binding
   subjects:
     - kind: ServiceAccount
       name: runner-sa
       namespace: actions-runner-system
   roleRef:
     kind: ClusterRole
     name: runner-deployments
     apiGroup: rbac.authorization.k8s.io
   ```

   Note: the API group for RBAC resources is `rbac.authorization.k8s.io`; the original `clusterrole.rbac.authorization.k8s.io` is not a valid group.
6. Then generate a token for the service account and save it as a secret (e.g. `TOKEN`) in the GitHub repository. The GitHub Action passes this token on the remote runner's requests to the API server, which uses it to authenticate and authorize them:

   `TOKEN=$(kubectl create token runner-sa --duration=8760h --namespace=actions-runner-system)`

7. Finally, make the updates and push the code to the `main` branch of the repo to trigger the CI/CD pipeline.
