This guide walks you through setting up PIF as a transparent proxy in front of your LLM API and integrating it with popular SDKs.
PIF sits between your application and the LLM API as a reverse proxy:
```
Your App ──▶ PIF Proxy (:8080) ──▶ LLM API (OpenAI / Anthropic)
                    │
                    └── Scans every prompt
                        for injection attacks
```
Your application sends requests to PIF instead of directly to the LLM API. PIF scans all prompts in real time and either forwards clean requests or blocks malicious ones.
Install the CLI with `go install`:

```bash
go install github.com/ogulcanaydogan/Prompt-Injection-Firewall/cmd/pif-cli@latest
```

Or build from source:

```bash
git clone https://github.com/ogulcanaydogan/Prompt-Injection-Firewall.git
cd Prompt-Injection-Firewall
go build -o pif ./cmd/pif-cli/
go build -o pif-firewall ./cmd/firewall/
```

A Docker image is also available:

```bash
docker pull ghcr.io/ogulcanaydogan/prompt-injection-firewall:latest
```

Start the proxy, pointing it at your provider's API:

```bash
# For OpenAI
pif proxy --target https://api.openai.com --listen :8080

# For Anthropic
pif proxy --target https://api.anthropic.com --listen :8080
```

Verify it is running:
```bash
curl http://localhost:8080/healthz
# {"status":"ok"}

curl http://localhost:8080/metrics
# Prometheus metrics output
```

Point the OpenAI Python SDK at PIF by overriding `base_url`:

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",
    base_url="http://localhost:8080/v1",  # Point to PIF
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

The Anthropic Python SDK works the same way:

```python
import anthropic

client = anthropic.Anthropic(
    api_key="sk-ant-...",
    base_url="http://localhost:8080",  # Point to PIF
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello!"}],
)
```

In Node.js, set `baseURL` on the OpenAI client:

```javascript
const OpenAI = require("openai");

const client = new OpenAI({
  apiKey: "sk-...",
  baseURL: "http://localhost:8080/v1", // Point to PIF
});

const response = await client.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
});
```

From Go, send raw HTTP requests to the proxy:

```go
req, _ := http.NewRequest("POST", "http://localhost:8080/v1/chat/completions", body)
req.Header.Set("Authorization", "Bearer sk-...")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
```

Or use curl directly:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4","messages":[{"role":"user","content":"Hello!"}]}'
```

Many SDKs support a base URL environment variable:
```bash
export OPENAI_BASE_URL=http://localhost:8080/v1
# Now any OpenAI SDK call will go through PIF automatically
```

When PIF detects an injection, it returns HTTP 403 with a JSON error body:
```json
{
  "error": {
    "message": "Request blocked by Prompt Injection Firewall",
    "type": "prompt_injection_detected",
    "score": 0.85,
    "findings": 2
  }
}
```

Make sure your application handles 403 responses gracefully.
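Handling a block can be as simple as checking for a 403 status and the `prompt_injection_detected` error type. A minimal stdlib-only sketch — the `parse_pif_error` helper is illustrative, not part of PIF, and assumes the 403 body shape shown above:

```python
import json

def parse_pif_error(status_code, body):
    """Return PIF block details if the response is a firewall block, else None."""
    if status_code != 403:
        return None
    try:
        error = json.loads(body).get("error", {})
    except (json.JSONDecodeError, AttributeError):
        return None
    if error.get("type") != "prompt_injection_detected":
        return None
    return {"score": error.get("score"), "findings": error.get("findings")}

blocked = ('{"error": {"message": "Request blocked by Prompt Injection Firewall",'
           ' "type": "prompt_injection_detected", "score": 0.85, "findings": 2}}')
print(parse_pif_error(403, blocked))  # {'score': 0.85, 'findings': 2}
print(parse_pif_error(200, blocked))  # None: a 200 is a normal completion
```

When the helper returns a dict, your application can show the user a generic "request blocked" message rather than surfacing the raw API error.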
PIF supports three response modes, configured via the `--action` flag:

| Action | Behavior | Use Case |
|---|---|---|
| `block` | Returns HTTP 403 | Production |
| `flag` | Forwards with `X-PIF-Flagged: true` header | Staging |
| `log` | Forwards silently, logs detection | Development |
```bash
# Staging: flag but don't block
pif proxy --target https://api.openai.com --listen :8080 --action flag

# Development: log only
pif proxy --target https://api.openai.com --listen :8080 --action log
```

When using `flag` mode, check the response headers:

```
X-PIF-Flagged: true
X-PIF-Score: 0.85
```

Rate limiting and adaptive thresholds are enabled by default in `config.yaml`:
```yaml
proxy:
  rate_limit:
    enabled: true
    requests_per_minute: 120
    burst: 30
    key_header: "X-Forwarded-For"

detector:
  adaptive_threshold:
    enabled: true
    min_threshold: 0.25
    ewma_alpha: 0.2
```
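The `ewma_alpha` setting suggests the adaptive threshold tracks an exponentially weighted moving average of recent detection scores, floored at `min_threshold`. The following is one plausible reading of those two settings, not PIF's actual implementation:

```python
def update_ewma(ewma, score, alpha=0.2):
    """Fold a new detection score into the running average (alpha = ewma_alpha)."""
    return alpha * score + (1 - alpha) * ewma

def effective_threshold(ewma, min_threshold=0.25):
    """The threshold adapts with observed traffic but never drops below the floor."""
    return max(min_threshold, ewma)
```

Under this reading, a burst of high-scoring traffic raises the bar for subsequent requests, while `min_threshold` keeps the detector from becoming too permissive during quiet periods.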
```bash
kubectl apply -f deploy/kubernetes/namespace.yaml
kubectl apply -f deploy/kubernetes/webhook-service.yaml
kubectl apply -f deploy/kubernetes/webhook-deployment.yaml
kubectl apply -f deploy/kubernetes/webhook-certificate.yaml
kubectl apply -f deploy/kubernetes/validating-webhook-configuration.yaml
```

The webhook validates Pod, Deployment, StatefulSet, Job, and CronJob resources on CREATE and UPDATE.
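Conceptually, an admission webhook like this inspects workload specs and rejects those that would bypass the proxy. A purely illustrative check — the policy details (the `OPENAI_BASE_URL` rule and the `pif-proxy` host) are assumptions for the sketch, not PIF's actual rules:

```python
def bypasses_pif(pod_spec, pif_host="pif-proxy"):
    """Illustrative policy: any container setting OPENAI_BASE_URL must point it at PIF."""
    for container in pod_spec.get("containers", []):
        for env in container.get("env", []):
            if env.get("name") == "OPENAI_BASE_URL" and pif_host not in env.get("value", ""):
                return True  # this container would talk to the provider directly
    return False

spec = {"containers": [{"env": [
    {"name": "OPENAI_BASE_URL", "value": "https://api.openai.com/v1"}]}]}
print(bypasses_pif(spec))  # True: the container points straight at OpenAI
```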
- PIF proxy starts without errors
- `curl /healthz` returns `{"status":"ok"}`
- `curl /metrics` returns Prometheus metrics
- Clean prompts pass through successfully
- Known injection attempts return HTTP 403
- Your application handles 403 responses gracefully
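The endpoint checks above can be scripted. A minimal sketch with an injectable `fetch` callable so it can run against any PIF instance (or, as here, a stub); `run_checks` is an illustrative helper, not a PIF tool:

```python
import json

def run_checks(fetch):
    """fetch(path) -> (status, body). Returns a dict of check name -> pass/fail."""
    results = {}
    status, body = fetch("/healthz")
    results["healthz"] = status == 200 and json.loads(body).get("status") == "ok"
    status, _ = fetch("/metrics")
    results["metrics"] = status == 200
    return results

# Stub transport standing in for a live proxy:
def fake_fetch(path):
    if path == "/healthz":
        return 200, '{"status":"ok"}'
    return 200, "pif_requests_total 0"

print(run_checks(fake_fetch))  # {'healthz': True, 'metrics': True}
```

Swapping `fake_fetch` for a real HTTP client pointed at `http://localhost:8080` turns this into a live smoke test.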