- Simulate pod eviction due to memory pressure
- Simulate pod eviction due to ephemeral storage pressure
- Evict pods manually by draining a node
- Protect critical pods using Pod Priority during eviction
- Observe eviction events and troubleshoot using `kubectl`
- A Kubernetes cluster (Minikube, kind, or cloud-based)
- `kubectl` CLI configured
- A `stress` image for simulating resource pressure (`polinux/stress` or `progrium/stress`)
- A test namespace (e.g., `kubectl create namespace eviction-lab`)
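Before running the tasks, it can help to check how much memory and ephemeral storage the node can actually allocate, since pressure-based evictions trigger as usage approaches these limits. This is an optional sanity check; the `kubectl top` command assumes the metrics-server add-on is installed, which the lab does not otherwise require.

```bash
# Show each node's allocatable resources (the budget the kubelet
# enforces when deciding whether to evict pods).
kubectl describe nodes | grep -A 6 "Allocatable"

# Optional: show current usage per node (requires metrics-server).
kubectl top nodes
```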
Create a YAML file named `memory-stress.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-stress
  namespace: eviction-lab
spec:
  containers:
  - name: memory-stress
    image: polinux/stress
    command: ["stress"]
    args: ["--vm", "2", "--vm-bytes", "1G", "--timeout", "120s"]
    resources:
      requests:
        memory: "500Mi"
      limits:
        memory: "1Gi"
```

Create a YAML file named `low-priority-pod.yaml`:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: low-priority
  namespace: eviction-lab
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
```

Deploy both pods:
```bash
kubectl apply -f memory-stress.yaml
kubectl apply -f low-priority-pod.yaml
```

Monitor pod status:

```bash
kubectl get pods -n eviction-lab -w
```

If the node has limited memory, Kubernetes will evict the low-priority pod first.
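While waiting, you can also watch for eviction events directly. The field selector below filters on the event reason; `Evicted` is the reason the kubelet records for pressure-based evictions:

```bash
# Stream only eviction-related events in the lab namespace.
kubectl get events -n eviction-lab --field-selector reason=Evicted -w
```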
Check the eviction reason:

```bash
kubectl describe pod low-priority -n eviction-lab
```

Create a YAML file named `storage-stress.yaml`:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-stress
  namespace: eviction-lab
spec:
  containers:
  - name: storage-stress
    image: busybox
    command: ["sh", "-c", "dd if=/dev/zero of=/data/bigfile bs=1M count=2048; sleep 300"]
    volumeMounts:
    - name: ephemeral-storage
      mountPath: /data
    resources:
      requests:
        ephemeral-storage: "200Mi"
      limits:
        ephemeral-storage: "500Mi"
  volumes:
  - name: ephemeral-storage
    emptyDir: {}
```

Deploy the pod:
```bash
kubectl apply -f storage-stress.yaml
```

Monitor eviction:

```bash
kubectl get pods -n eviction-lab -w
```

Check node storage pressure:

```bash
kubectl describe node <node-name>
```

You should see a `DiskPressure` condition if the storage fills up.
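For a quicker check than reading the full `describe` output, a JSONPath query can pull out just the `DiskPressure` condition (replace `<node-name>` with a node from `kubectl get nodes`):

```bash
# Prints the DiskPressure condition status: True, False, or Unknown.
kubectl get node <node-name> \
  -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'
```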
List available nodes:

```bash
kubectl get nodes
```

Cordon the node, which prevents new pods from being scheduled on it:

```bash
kubectl cordon <node-name>
```

Drain the node, which evicts all non-DaemonSet pods (on clusters older than v1.20, use the deprecated `--delete-local-data` flag instead):

```bash
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
```

Monitor eviction:

```bash
kubectl get pods -o wide -n eviction-lab
```

Once testing is complete, uncordon the node:

```bash
kubectl uncordon <node-name>
```

Create a YAML file named `high-priority.yaml`:
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000
globalDefault: false
description: "Priority class for critical pods."
```

Apply the priority class:

```bash
kubectl apply -f high-priority.yaml
```

Create a YAML file named `critical-pod.yaml`:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-pod
  namespace: eviction-lab
spec:
  priorityClassName: high-priority
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "500Mi"
      requests:
        memory: "200Mi"
```

Apply the manifest:

```bash
kubectl apply -f critical-pod.yaml
```

Re-run the `memory-stress.yaml` pod from Task 1 and check which pod gets evicted first:

```bash
kubectl apply -f memory-stress.yaml
```

The low-priority pod should be evicted, while the critical pod remains running.
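To compare the pods' resolved priorities side by side (Kubernetes copies the PriorityClass value into each pod's `.spec.priority` field at admission), a custom-columns view works well:

```bash
# The pod with the lower PRIORITY value is the eviction candidate
# under memory pressure; pods without a priority class show <none>.
kubectl get pods -n eviction-lab \
  -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priority,STATUS:.status.phase
```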
- Check events related to eviction:

```bash
kubectl get events --sort-by=.metadata.creationTimestamp -n eviction-lab
```

- Check node conditions:

```bash
kubectl describe node <node-name>
```

- Describe a specific pod for eviction details:

```bash
kubectl describe pod <pod-name> -n eviction-lab
```

After completing the lab, clean up the created resources:
```bash
kubectl delete namespace eviction-lab
kubectl delete priorityclass high-priority
kubectl uncordon <node-name>  # If previously drained
```

This lab demonstrated different reasons why Kubernetes evicts pods, including:
✅ Memory pressure leading to eviction of low-priority pods
✅ Ephemeral storage pressure causing evictions when disk space is low
✅ Node drain operations forcing pod eviction
✅ Pod Priority ensuring critical workloads remain unaffected during resource pressure