"Universal AI security framework - Protect LLM applications from prompt injection, jailbreaks, and adversarial attacks. Works with OpenAI, Anthropic, LangChain, and any LLM."
-
Updated
Mar 15, 2026 - Python
"Universal AI security framework - Protect LLM applications from prompt injection, jailbreaks, and adversarial attacks. Works with OpenAI, Anthropic, LangChain, and any LLM."
Vulntrex - View, Run & Compare Garak Scans in one sleek web interface.
Projet issu du codelab Devfest Nantes 2025 “La guerre des prompts” : atelier de 2h pour apprendre à pirater des IA et comment les protéger via des frameworks open source
Out-Of-Tree Llama Stack Eval Provider for Red Teaming LLM Systems with Garak
Ethical red teaming and penetration testing framework aligned to MITRE ATT&CK/ATLAS, OWASP, and NIST
Web-based GUI for Garak LLM security scanner. Test local Ollama models with an intuitive interface.
This repo consist of exploratory work related to AI pen testing using open source versions of garak, promptfoo
OWASP GenAI Top 10 vulnerability testing framework for LLMs — fully local, GPU-accelerated, custom guardrails
Compliance-focused vulnerability probes for NVIDIA garak, targeting LLMs in regulated industries (CMMC, NIST, HIPAA, DFARS)
🔍 Discover vulnerabilities in LLMs with garak, a tool that probes for weaknesses like hallucination, data leakage, and misinformation effectively.
Add a description, image, and links to the garak topic page so that developers can more easily learn about it.
To associate your repository with the garak topic, visit your repo's landing page and select "manage topics."