Test and defend Large Language Models against prompt injections, jailbreaks, and adversarial attacks with a web-based interactive lab.
-
Updated
Jan 28, 2026 - Python
Test and defend Large Language Models against prompt injections, jailbreaks, and adversarial attacks with a web-based interactive lab.
Official implementation of "ProxyPrompt: Securing System Prompts against Prompt Extraction Attacks"
Lightning-fast AI Firewall, integrated with leading agent frameworks
Add a description, image, and links to the prompt-defense topic page so that developers can more easily learn about it.
To associate your repository with the prompt-defense topic, visit your repo's landing page and select "manage topics."