This guide documents the full journey of setting up a Kubernetes cluster with Cilium and Tetragon to enforce a multi-layered Zero Trust security posture. The core technology leveraged here is eBPF (extended Berkeley Packet Filter), which enables high-performance network and runtime enforcement in the kernel.
This section covers the prerequisite setup for Windows/WSL2 and Docker, required to run a local Kubernetes cluster using kind.
- Install WSL2:

```
wsl --install
```

- Verify the WSL2 installation:

```
wsl -l -v
```

- Install Docker Desktop.
- Verify the Docker installation:

```
docker run hello-world
```
We load the application images and deploy the manifests to a new namespace. At this stage, all network traffic is allowed by default.
Load the application images into the zero-trust-k8s cluster:
```
kind load docker-image nublenews-backend:dev --name zero-trust-k8s
kind load docker-image nublenews-frontend:dev --name zero-trust-k8s
```

The current network posture is Default Accept.

```
kubectl apply -f manifests/namespace.yaml
kubectl apply -f manifests/backend.yaml
kubectl apply -f manifests/frontend.yaml
```

Cilium is the Container Network Interface (CNI) that provides high-performance networking and network policy enforcement using eBPF. Hubble provides visibility into the network flows.
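For reference, a minimal sketch of what `manifests/backend.yaml` might contain, assuming a single-replica Deployment plus a ClusterIP Service on TCP port 3001 (this is an illustration, not the project's actual manifest; everything beyond the `app: backend` label and port 3001 is an assumption):

```yaml
# Hypothetical sketch -- the real backend.yaml may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: nublenews
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend          # label later referenced by the CiliumNetworkPolicy
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: nublenews-backend:dev
          imagePullPolicy: IfNotPresent   # image was side-loaded via `kind load`
          ports:
            - containerPort: 3001
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: nublenews
spec:
  selector:
    app: backend
  ports:
    - port: 3001
      targetPort: 3001
```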
```
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --version 1.18.4 `
  --namespace kube-system `
  --set image.pullPolicy=IfNotPresent `
  --set ipam.mode=kubernetes `
  --set hubble.enabled=true `
  --set hubble.relay.enabled=true `
  --set hubble.ui.enabled=true `
  --set kubeProxyReplacement=true `
  --skip-crds
```

Wait until the node is Ready (may take a few minutes):

```
kubectl wait --for=condition=Ready nodes --all --timeout=300s
```

Tetragon is the component that uses eBPF to monitor application behavior at the kernel level, providing deep runtime visibility.
```
helm repo update
helm install tetragon cilium/tetragon --version 1.6.0 -n kube-system
kubectl rollout status -n kube-system ds/tetragon -w
```

The tetra CLI is used to stream events from the Tetragon agent.
- Manually download `tetra-windows-amd64.tar.gz` from the Cilium Tetragon releases page.
- Extract the `tetra.exe` binary.
- Move `tetra.exe` to a directory on your PATH (e.g., `C:\tetragon`).
- Restart PowerShell to load the updated PATH.
This check confirms that the Tetragon agent is capturing kernel-level events.
- Terminal 1 (Port Forward): Set up the port bridge (using local port 54500 to avoid conflicts; replace `tetragon-d4k46` with your Tetragon pod's name):

```
kubectl port-forward tetragon-d4k46 -n kube-system 54500:54321
```

- Terminal 2 (Monitor): Stream events:

```
C:\tetragon\tetra.exe getevents -n nublenews --server-address 127.0.0.1:54500
```

- Terminal 3 (Executor): Generate test events:

```
$POD_NAME = (kubectl get pods -n nublenews -l app=frontend -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n nublenews $POD_NAME -- cat /etc/hosts
```
Verification: Events should appear in Terminal 2.
The goal is to move from Default Accept to Least Privilege by enforcing a multi-layered policy.
First, we prove that the default posture is vulnerable.
- Deploy Attacker Pod:

```
kubectl apply -f manifests/attacker.yaml
kubectl wait --for=condition=Ready pod/attacker -n nublenews --timeout=30s
```

- Verify the successful attack: The attacker successfully connects and exfiltrates data from the backend.

Debugging Note: The backend pod IP was found to be `10.244.0.162` on TCP port `3001`.

```
# Prove data can be exfiltrated
kubectl exec -n nublenews attacker -- curl 10.244.0.162:3001
# Result: SUCCESS (Vulnerability Confirmed)
```
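For context, `manifests/attacker.yaml` could be as simple as the following sketch (the image and command are assumptions; any pod carrying `curl` and a label other than `app: frontend` demonstrates the same point):

```yaml
# Hypothetical sketch of an attacker pod -- the real attacker.yaml may differ.
apiVersion: v1
kind: Pod
metadata:
  name: attacker
  namespace: nublenews
  labels:
    app: attacker       # not whitelisted by the L3/L4 policy applied later
spec:
  containers:
    - name: attacker
      image: curlimages/curl:8.9.1
      command: ["sleep", "infinity"]   # keep the pod alive for kubectl exec
```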
We apply a CiliumNetworkPolicy to restrict access to the backend service.
- Policy (`allow-l7-ingress.yaml`): This policy enforces:
  - L3/L4: Only the `frontend` pod (`matchLabels: app: frontend`) can talk to the `backend` on TCP port `3001`.
  - L7 (Application): The connection is further restricted to only `GET /news` HTTP requests.

```yaml
# Policy Snippet
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
# ...
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend       # <-- L3/L4 Enforcement (Source)
      toPorts:
        - ports:
            - port: "3001"
              protocol: TCP     # <-- L4 Enforcement (Port/Protocol)
          rules:
            http:               # <-- L7 Enforcement
              - method: "GET"
                path: "/news"
```
- Apply the L7 Policy:

```
kubectl apply -f manifests/allow-l7-ingress.yaml
```
- Verify the successful defense:

```
# Re-running the attack now FAILS because the attacker pod's label ('app: attacker') is not whitelisted.
kubectl exec -n nublenews attacker -- curl 10.244.0.162:3001
# Result: FAIL (Zero Trust L3/L4 Enforcement Confirmed)
```
| Component | Security Layer | Status | Description |
|---|---|---|---|
| Ingress (Inbound) | L3, L4, L7 | Complete | Only the frontend can talk to the backend on TCP port 3001, and only for GET /news requests. |
| Security Visibility | Runtime/Kernel | Complete | Tetragon is running and capturing kernel-level events via tetra CLI. |
You have successfully achieved the core objectives of this project: setting up the Zero Trust platform and enforcing a multi-layered policy between critical application components.
- Check the frontend at `http://localhost:30080` (should still work for legitimate traffic).
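Reaching `localhost:30080` from the host implies the frontend Service is exposed as a NodePort and the kind cluster maps that port to the host. A sketch of the relevant kind cluster config, assuming NodePort 30080 (this mapping must be set at cluster creation time and is an assumption about how the cluster was created):

```yaml
# Hypothetical kind cluster config -- assumes the frontend Service uses NodePort 30080.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # NodePort of the frontend Service inside the cluster
        hostPort: 30080        # reachable as localhost:30080 on the host
```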
- Network Egress Lockdown: Apply a policy to block all outbound traffic from the `nublenews` namespace except for necessary services (e.g., DNS to CoreDNS in `kube-system`).
- Runtime Enforcement: Create a Tetragon `TracingPolicy` to block unexpected binaries (like `/bin/bash` or `nc`) from executing inside the `frontend` or `backend` containers, leveraging the kernel-level control established in Section 4.
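The two next steps above could look roughly like the following sketches. Both are untested illustrations based on the Cilium and Tetragon policy schemas; the policy names, the kube-dns labels, and the exact binary paths are assumptions, and a real egress policy would likely also need an L7 DNS rule or additional allow rules for in-namespace traffic:

```yaml
# Sketch 1 (assumed): egress lockdown -- only DNS to CoreDNS in kube-system is allowed.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: egress-dns-only
  namespace: nublenews
spec:
  endpointSelector: {}              # applies to every pod in nublenews
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s:k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
---
# Sketch 2 (assumed): Tetragon TracingPolicy -- kill unexpected shells at execve time.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: block-shells
spec:
  kprobes:
    - call: "sys_execve"
      syscall: true
      args:
        - index: 0
          type: "string"
      selectors:
        - matchArgs:
            - index: 0
              operator: "Equal"
              values:
                - "/bin/bash"
                - "/usr/bin/nc"
          matchActions:
            - action: Sigkill        # enforce (kill the process), not just observe
```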