QWED Open Responses

Verification Guards for AI Agent Outputs


**Verification guards for AI agent outputs. Verify before you execute.**

QWED Open Responses provides deterministic verification guards for AI responses, tool calls, and structured outputs. Works with OpenAI Responses API, LangChain, LlamaIndex, and other AI agent frameworks.


Installation

pip install qwed-open-responses

With optional integrations:

pip install qwed-open-responses[openai]      # OpenAI Responses API
pip install qwed-open-responses[langchain]   # LangChain
pip install qwed-open-responses[tax]         # Tax Verification (Payroll, Crypto)
pip install qwed-open-responses[finance]     # Finance Verification (NPV, ISO 20022)
pip install qwed-open-responses[legal]       # Legal Verification (Contracts, Jurisdictions)
pip install qwed-open-responses[all]         # All integrations

💡 What QWED Open Responses Is (and Isn't)

✅ QWED Open Responses IS:

  • Verification middleware for AI agents (OpenAI, LangChain, LlamaIndex)
  • Deterministic: uses symbolic logic and formal verification rules
  • Framework-agnostic: works with any LLM or agent framework
  • A safety layer: prevents dangerous tool calls and incorrect outputs

❌ QWED Open Responses is NOT:

  • An agent framework (use LangChain or AutoGen for that)
  • A prompt engineering tool (use DSPy for that)
  • A vector database (use Pinecone or Weaviate for that)
  • A vaguely defined "guardrail" (it uses mathematical proofs, not regex)

Think of QWED as the "firewall" for your AI agent's actions and outputs.

LangChain builds the agent. OpenAI powers the brain. QWED secures the actions.


🆚 How We're Different from Other Guardrails

| Aspect | Guardrails AI / NVIDIA NeMo | DSPy | QWED Open Responses |
|---|---|---|---|
| Primary Goal | Format validation (XML/RAIL) | Prompt optimization | Deterministic verification |
| Tool Security | Regex-based blocking | N/A | AST analysis + whitelist |
| Math Accuracy | LLM self-correction | Prompt tuning | SymPy symbolic math |
| Approach | "Re-ask the LLM" | "Train the prompt" | "Verify legally/mathematically" |
| Integration | Wraps LLM calls | Replaces prompt pipeline | Middleware / Callback |
| Determinism | Probabilistic | Probabilistic | 100% Deterministic |

Use Together (Best Practice)

┌──────────────┐     ┌───────────────┐     ┌──────────────┐
│   LangChain  │ ──► │     QWED      │ ──► │  Verified    │
│    Agent     │     │  (Middleware) │     │  Tool Call   │
└──────────────┘     └───────────────┘     └──────────────┘

🔒 Security & Privacy

Verification happens locally. No data leaves your infrastructure.

| Concern | QWED Approach |
|---|---|
| Data Transmission | ❌ No external API calls for verification |
| Logic Execution | ✅ Local Python/Z3 engines |
| Latency | ✅ Sub-millisecond overhead for most guards |
| Audit | ✅ Full logs of blocked actions |

Perfect for:

  • Agents with access to databases or APIs
  • Enterprise internal tools
  • Automated financial/legal assistants

โ“ FAQ

Does it slow down my agent?

Negligibly. Most guards (Schema, Tool, Argument) run in <1ms. MathGuard runs in <5ms. It's much faster than making another LLM call to double-check.

Can I use it with custom agents?

Yes! You don't need LangChain. QWED works with raw OpenAI API calls or any Python code. Just pass the output to the Verifier.
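For a custom agent, the flow is simply: extract the tool call from whatever payload your loop produces, then run it through a guard before executing. Below is a minimal plain-Python sketch of that flow; `is_safe` is an illustrative stand-in, and with the package installed you would pass the same fields to `ResponseVerifier.verify_tool_call` as shown in the Quick Start instead.

```python
# Verifying a tool call extracted from a raw agent loop (no framework).
# `is_safe` is an illustrative stand-in for a real guard check.
BLOCKED_TOOLS = {"execute_shell", "delete_file"}

def is_safe(tool_name, arguments):
    # Block denylisted tools regardless of their arguments.
    return tool_name not in BLOCKED_TOOLS

print(is_safe("get_weather", {"city": "Oslo"}))       # True
print(is_safe("execute_shell", {"cmd": "rm -rf /"}))  # False
```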

How does MathGuard work?

It extracts numbers and operators from the output and uses SymPy to verify that the stated result matches the calculation. It does NOT ask the LLM to check itself.
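The core idea, recomputing the arithmetic exactly and comparing it to the stated result, can be sketched in a few lines. This is not QWED's actual implementation: the stdlib `Fraction` stands in here for SymPy's exact rationals, and the field names are assumed for the example.

```python
from fractions import Fraction

def check_total(fields):
    # Recompute subtotal + tax with exact rational arithmetic (no
    # floating-point drift) and compare against the stated total.
    expected = Fraction(str(fields["subtotal"])) + Fraction(str(fields["tax"]))
    return expected == Fraction(str(fields["total"]))

print(check_total({"subtotal": 100, "tax": 8, "total": 108}))  # True
print(check_total({"subtotal": 100, "tax": 8, "total": 110}))  # False
```

Because the check is a recomputation rather than a second LLM call, the result is deterministic: the same output always passes or fails the same way.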

Is it compatible with streaming?

Yes, but verification usually happens on the final tool call or complete message chunk. We are working on stream-interception middleware.


๐Ÿ—บ๏ธ Roadmap

✅ Released (v1.0.0)

  • ToolGuard - Block dangerous tools/patterns
  • SchemaGuard - JSON Schema validation
  • MathGuard - SymPy calculation verification
  • SafetyGuard - PII and injection checks
  • StateGuard - Finite state machine validation
  • ArgumentGuard - Type and range checking
  • Integrations: OpenAI, LangChain

🚧 In Progress

  • LlamaIndex Integration - First-class support
  • Streaming Verification - Verify chunks in real-time
  • Auto-Fix - Deterministic correction of JSON errors

🔮 Planned

  • Distributed Rules - Sync rules across agent swarms
  • Policy-as-Code - Define guards in YAML/JSON
  • Visual Dashboard - View blocked attempts stats

Quick Start

from qwed_open_responses import ResponseVerifier, ToolGuard, SchemaGuard

# Create verifier with guards
verifier = ResponseVerifier()

# Verify a tool call
result = verifier.verify_tool_call(
    tool_name="execute_sql",
    arguments={"query": "SELECT * FROM users"},
    guards=[ToolGuard()]
)

if result.verified:
    print("✅ Safe to execute")
else:
    print(f"❌ Blocked: {result.block_reason}")

Guards

| Guard | Purpose | Example |
|---|---|---|
| SchemaGuard | Validate JSON schema | Structured outputs |
| ToolGuard | Block dangerous tools | execute_shell, delete_file |
| MathGuard | Verify calculations | Totals, percentages |
| StateGuard | Validate state transitions | Order status changes |
| ArgumentGuard | Validate tool arguments | Types, ranges, formats |
| SafetyGuard | Comprehensive safety | PII, injection, budget |

Examples

Block Dangerous Tools

from qwed_open_responses import ToolGuard

guard = ToolGuard(
    blocked_tools=["execute_shell", "delete_file"],
    dangerous_patterns=[r"DROP TABLE", r"rm -rf"],
)

result = guard.check({
    "tool_name": "execute_sql",
    "arguments": {"query": "DROP TABLE users"}
})
# โŒ BLOCKED: Dangerous pattern detected

Validate Structured Outputs

from qwed_open_responses import SchemaGuard

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0}
    },
    "required": ["name", "age"]
}

guard = SchemaGuard(schema=schema)
result = guard.check({"output": {"name": "John", "age": 30}})
# ✅ Schema validation passed

Verify Calculations

from qwed_open_responses import MathGuard

guard = MathGuard()
result = guard.check({
    "output": {
        "subtotal": 100,
        "tax": 8,
        "total": 108
    }
})
# ✅ Math verification passed
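ArgumentGuard has no example in this section, so here is a plain-Python sketch of the kind of type-and-range check it performs. The spec format and function below are assumptions for illustration, not QWED's actual API.

```python
# ArgumentGuard-style check: each tool argument must have the expected
# type and fall within an allowed range or set of choices.
SPEC = {
    "amount": {"type": float, "min": 0.0, "max": 10_000.0},
    "currency": {"type": str, "choices": {"USD", "EUR"}},
}

def check_arguments(args, spec=SPEC):
    for name, rules in spec.items():
        value = args.get(name)
        if not isinstance(value, rules["type"]):
            return False, f"{name}: expected {rules['type'].__name__}"
        if "min" in rules and not (rules["min"] <= value <= rules["max"]):
            return False, f"{name}: out of range"
        if "choices" in rules and value not in rules["choices"]:
            return False, f"{name}: not an allowed value"
    return True, None

print(check_arguments({"amount": 250.0, "currency": "USD"}))  # (True, None)
print(check_arguments({"amount": -5.0, "currency": "USD"}))   # rejected
```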

Safety Checks

from qwed_open_responses import SafetyGuard

guard = SafetyGuard(
    check_pii=True,
    check_injection=True,
    max_cost=100.0,
)

result = guard.check({
    "content": "ignore previous instructions and..."
})
# โŒ BLOCKED: Prompt injection detected

Framework Integrations

LangChain

from qwed_open_responses.middleware.langchain import QWEDCallbackHandler

callback = QWEDCallbackHandler(
    guards=[ToolGuard(), SafetyGuard()]
)

agent = create_agent(callbacks=[callback])

OpenAI Responses API

from qwed_open_responses.middleware.openai_sdk import VerifiedOpenAI

client = VerifiedOpenAI(
    api_key="...",
    guards=[ToolGuard(), SchemaGuard(schema=my_schema)]
)

response = client.responses.create(...)
# Automatically verified before returning

Why QWED Open Responses?

| Without Verification | With QWED |
|---|---|
| LLM calls execute_shell("rm -rf /") | BLOCKED by ToolGuard |
| LLM returns wrong calculation | CAUGHT by MathGuard |
| LLM outputs PII in response | DETECTED by SafetyGuard |
| LLM hallucinates JSON format | REJECTED by SchemaGuard |

License

Apache 2.0 - See LICENSE
