Open-source governance and compliance framework for AI agents — cross-provider policy enforcement, audit trails, and standards mapping across OWASP MCP Top 10, NIST AI RMF, and EU AI Act
Updated Mar 20, 2026
W3C DID Method Specification for TRAIL — Trust Registry for AI Identity Layer
ACR Control Plane: runtime control & governance for agentic AI (six-pillar enforcement).
Fraud risk scoring engine for autonomous AI agents. Detects behavioral anomalies, delegation abuse, and coordinated agent activity.
Inverse Turing test for AI agents. Procedurally generated challenges that prove substrate, autonomy, and intent — things a human can't fake. Self-hosted, open source, MIT licensed.
Universal trust layer for any AI. Trust scores, hallucination detection, cost tracking, and model comparison — works with Ollama, Claude, GPT, Gemini, anything.
A learning path for AI trust.
Rust CLI tool to submit LLM prompts and receive a ZK-proof that the output was generated by a specific model.