🤗 Hugging Face · Free Platform · Tech Report
📢 The only production-ready, fully open-source AI guardrails platform for enterprise AI applications
OpenGuardrails is an open-source runtime AI security and policy enforcement layer that protects the entire AI inference pipeline: prompts, agents, tool calls, and outputs.
It is designed for real enterprise environments, not just moderation demos.
Most LLM guardrails focus on one question:
"Is this content unsafe?"
OpenGuardrails focuses on a more important enterprise question:
"Is this behavior allowed by your enterprise policy at runtime?"
| Dimension | Typical Guardrails | OpenGuardrails |
|---|---|---|
| Focus | Content moderation | Runtime policy enforcement |
| Enterprise rules | Fixed / hardcoded | First-class, configurable |
| Custom scanners | Limited | Native & extensible |
| Agent & tool safety | Weak | Built-in |
| Deployment | SaaS-centric | On-prem / private |
| Open source | Partial | Fully open-source |
🛡️ Runtime AI Security
- Prompt injection & jailbreak detection
- Unsafe and non-compliant content detection
- Input / output data leak prevention
📋 Policy-Based Guardrails
- Enforce enterprise rules beyond "unsafe"
- Off-topic, scope control, business constraints
- Auditable, versioned policies
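To make the idea concrete, here is a minimal sketch of a versioned, auditable policy that blocks content which is perfectly "safe" yet still off-limits for the business. The `Policy` class, its rule names, and the schema are illustrative assumptions, not the platform's actual configuration format (policies are configured through the platform, not hand-coded):

```python
import re
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Toy versioned policy: every rule has a name, so hits are auditable."""
    version: str
    blocked_topics: list = field(default_factory=list)    # scope control
    blocked_patterns: list = field(default_factory=list)  # business constraints

    def evaluate(self, text: str) -> list:
        """Return the names of all rules the text violates."""
        violations = []
        for topic in self.blocked_topics:
            if topic.lower() in text.lower():
                violations.append(f"off_topic:{topic}")
        for name, pattern in self.blocked_patterns:
            if re.search(pattern, text):
                violations.append(f"pattern:{name}")
        return violations

policy = Policy(
    version="2025-01-rev3",  # hypothetical version tag
    blocked_topics=["competitor pricing"],
    blocked_patterns=[("internal_ticket_id", r"\bTICK-\d{5}\b")],
)

# Harmless content, but disallowed by enterprise policy:
print(policy.evaluate("What is our competitor pricing strategy?"))
# ['off_topic:competitor pricing']

# In-scope content passes:
print(policy.evaluate("Summarize this support conversation."))
# []
```

The point of the sketch is the distinction itself: neither prompt is "unsafe" in the moderation sense, yet the first one violates a business rule that a runtime enforcement layer must catch.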
🧩 Custom Scanners (Core Capability)
- LLM-based, regex-based, keyword-based
- Trainable and application-scoped
- No code changes required
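The scanner types above can be sketched conceptually as interchangeable components sharing one interface. The class names and `scan` method below are illustrative assumptions, not OpenGuardrails' actual scanner API:

```python
import re

class RegexScanner:
    """Flags text matching a regular expression (e.g. credential formats)."""
    def __init__(self, name: str, pattern: str):
        self.name, self.pattern = name, re.compile(pattern)
    def scan(self, text: str) -> bool:
        return bool(self.pattern.search(text))

class KeywordScanner:
    """Flags text containing any blocked keyword (case-insensitive)."""
    def __init__(self, name: str, keywords: list):
        self.name, self.keywords = name, [k.lower() for k in keywords]
    def scan(self, text: str) -> bool:
        return any(k in text.lower() for k in self.keywords)

# Scanners are composed per application; an LLM-based scanner would plug in
# the same way, delegating `scan` to a model call instead of a fixed rule.
scanners = [
    RegexScanner("aws_key", r"AKIA[0-9A-Z]{16}"),
    KeywordScanner("project_codename", ["project aurora"]),
]

def check(text: str) -> list:
    """Return the names of all scanners that fired on the text."""
    return [s.name for s in scanners if s.scan(text)]

print(check("My key is AKIAABCDEFGHIJKLMNOP"))  # ['aws_key']
```

Because each scanner is just data plus a shared interface, adding or retraining one is a configuration change for the application, not a code change in the calling service.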
🤖 Agent & Tool Protection
- Pre-tool-call checks
- Post-output validation
- Prevent unsafe actions, not just text
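A pre-tool-call check can be sketched as a gate that vets the tool name and its arguments before the action executes. Everything here (`ALLOWED_TOOLS`, the argument check, the return shape) is a hypothetical illustration of the pattern, not OpenGuardrails' actual interface:

```python
ALLOWED_TOOLS = {"search_docs", "create_ticket"}  # hypothetical per-app allow-list

def violates_policy(args: dict) -> bool:
    """Toy argument check: block obvious shell-injection payloads."""
    return any(";" in str(v) or "rm -rf" in str(v) for v in args.values())

def guarded_call(tool_name: str, args: dict, tool_fn):
    """Pre-tool-call gate: block the *action*, not just the text around it."""
    if tool_name not in ALLOWED_TOOLS:
        return {"blocked": True, "reason": f"tool '{tool_name}' not allowed"}
    if violates_policy(args):
        return {"blocked": True, "reason": "arguments violate policy"}
    return {"blocked": False, "result": tool_fn(**args)}

def search_docs(query: str) -> str:
    return f"results for {query!r}"

print(guarded_call("search_docs", {"query": "VPN setup"}, search_docs))   # runs the tool
print(guarded_call("delete_user", {"id": "42"}, search_docs))             # blocked: not allow-listed
print(guarded_call("search_docs", {"query": "x; rm -rf /"}, search_docs)) # blocked: bad arguments
```

Post-output validation is the mirror image: the same kind of gate runs on the tool's return value before the agent is allowed to act on it.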
🏢 Enterprise-Ready by Design
- Multi-application management
- High concurrency & low latency
- Visual management & audit logs
🌐 https://www.openguardrails.com/platform/
```bash
pip install openguardrails
```

```python
from openguardrails import OpenGuardrails

client = OpenGuardrails("your-api-key")
result = client.check_prompt("Teach me how to make a bomb")
print(result.overall_risk_level)  # high_risk
print(result.suggest_action)      # reject
```

OpenGuardrails can also sit in front of an existing OpenAI client as a security gateway:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5002/v1",
    api_key="sk-xxai-your-key"
)

# No other code changes needed - automatic safety protection!
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
```

OpenGuardrails is designed for private and regulated environments:
- ✅ Fully on-prem / private cloud deployment
- ✅ No data leaves your infrastructure
- ✅ Compatible with OpenAI / Claude / local models
Works as:
- API service
- Security gateway
- Platform-level component
Typical integration points:
- API Gateway / Proxy
- Agent runtime
- Central AI platform
See Deployment Guide for detailed instructions.
- 🤗 OpenGuardrails-Text-2510
- 3.3B parameters
- 119 languages
- Purpose-built for guardrails & policy interpretation
Detailed guides are intentionally moved out of the README:
- 📄 Deployment Guide - Complete deployment instructions
- 📄 Custom Scanners - Build your own scanners
- 📄 API Reference - Complete API documentation
- 📄 Architecture - System architecture & design
- 📄 Policy Model - Policy vs safety enforcement
- 🏢 Enterprise PoC Guide - PoC deployment guide
- 📄 Integrations - Dify, n8n, and AI Gateway
- 📄 Technical Report (arXiv)
- ⭐ Star us on GitHub if this project helps you
- 🤝 Contributions welcome - see Contributing Guide
- 📧 Contact: thomas@openguardrails.com
- 🌐 Website: https://openguardrails.com
- 💬 Issues: GitHub Issues
If you find our work helpful, please cite us:
```bibtex
@misc{openguardrails,
  title={OpenGuardrails: A Configurable, Unified, and Scalable Guardrails Platform for Large Language Models},
  author={Thomas Wang and Haowen Li},
  year={2025},
  url={https://arxiv.org/abs/2510.19169},
}
```

Build enterprise AI safely: with policy, not prompts.
Made with ❤️ by OpenGuardrails
