AI safety research. Adversarial testing. Prompt-injection surface analysis.
Building tools that stress-test how LLMs interact with infrastructure.
Currently probing the trust boundary between coding agents and their hook ecosystems.
interests = ["cognitive exploits", "cryptographic architecture", "coordinated disclosure"]
PGP: coming soon