Integration examples for the Prompt Injection Firewall (PIF). Each example demonstrates how to route LLM API requests through PIF for real-time prompt injection detection.
- A PIF proxy running:

  ```shell
  # For OpenAI
  pif proxy --target https://api.openai.com --listen :8080

  # For Anthropic
  pif proxy --target https://api.anthropic.com --listen :8080
  ```

- An API key for your LLM provider (OpenAI or Anthropic).
| Directory | Language | Description |
|---|---|---|
| `python/` | Python | OpenAI and Anthropic SDK integration |
| `nodejs/` | Node.js | OpenAI SDK integration with async/await |
| `curl/` | Shell | Raw HTTP requests for testing |
| `docker/` | Docker | Production-ready Docker Compose setup |
```shell
cd python
pip install -r requirements.txt

# OpenAI example
OPENAI_API_KEY=sk-... python openai_example.py

# Anthropic example
ANTHROPIC_API_KEY=sk-ant-... python anthropic_example.py
```

```shell
cd nodejs
npm install

OPENAI_API_KEY=sk-... node openai_example.js
```

```shell
cd curl

# OpenAI
OPENAI_API_KEY=sk-... bash openai.sh

# Anthropic
ANTHROPIC_API_KEY=sk-ant-... bash anthropic.sh
```

```shell
cd docker
docker compose up -d
```
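The shipped compose file lives in `docker/`; as a rough sketch of what putting PIF in front of OpenAI looks like (the image name, tag, and service layout here are assumptions, not the actual file):

```yaml
# Illustrative only: image name and tag are assumptions.
services:
  pif:
    image: pif:latest
    command: ["proxy", "--target", "https://api.openai.com", "--listen", ":8080"]
    ports:
      - "8080:8080"
```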
```shell
# Verify PIF is running
curl http://localhost:8080/healthz

# Then point your SDK at http://localhost:8080/v1
```

Every example shows three scenarios:
- Clean prompt -- A benign request that passes through PIF to the LLM API
- Prompt injection -- An attempt to override system instructions (blocked with HTTP 403)
- Data exfiltration / jailbreak -- An attempt to extract data or bypass safety (blocked with HTTP 403)
When PIF blocks a request, you will receive:
```json
{
  "error": {
    "message": "Request blocked by Prompt Injection Firewall",
    "type": "prompt_injection_detected",
    "score": 0.85,
    "findings": 2
  }
}
```