Enterprise-grade cryptographic governance framework for AI safety, compliance, and auditability
Built for the EU AI Act era: tamper-proof audit trails, deny-by-default security, and runtime enforcement
Documentation • Quick Start • Features • Roadmap • Contributing
Lexecon is a comprehensive cryptographic governance protocol that provides:
- Cryptographically Auditable Decision-Making: Every AI action is signed, hashed, and chain-linked
- Runtime Policy Enforcement: Deny-by-default gating with capability-based authorization
- Compliance Automation: Built-in mappings for the EU AI Act, GDPR, SOC 2, and ISO 27001
- Enterprise Security: RBAC, digital signatures (Ed25519/RSA-4096), audit logging
- Tamper-Evident Ledgers: Hash-chained audit trails with integrity verification
- Model-Agnostic: Works with OpenAI, Anthropic, and open-source models
Think of it as blockchain-grade governance for AI systems, without the blockchain.
Modern AI systems face critical governance challenges:
| Challenge | Impact | Regulatory Risk |
|---|---|---|
| Uncontrolled Tool Usage | Models execute arbitrary tools without oversight | High |
| No Audit Trail | Can't prove what decisions were made or why | Critical |
| Compliance Burden | Manual mapping of AI behavior to regulations | Very High |
| Policy Drift | Policies become outdated, inconsistent | Medium |
| Prompt Injection | Adversarial inputs bypass controls | High |
Lexecon provides cryptographic proof of governance:
# Before Lexecon: Hope and pray
model.call_tool("delete_production_database")  # 😱
# With Lexecon: Cryptographically enforced
decision = governance.request_decision(
    action="database:delete",
    context={"environment": "production"}
)
# DENIED - Cryptographically signed audit trail created

Lexicoding-forward policy system with graph-based evaluation.
Features:
- ✅ Declarative policy language (terms + relations)
- ✅ Compile-time validation and runtime evaluation
- ✅ Policy versioning with hash pinning
- ✅ Deterministic evaluation (no LLM in the loop)
Example:
from lexecon.policy import PolicyEngine, PolicyTerm, PolicyRelation
engine = PolicyEngine()
# Define terms (nodes in policy graph)
read_action = PolicyTerm.create_action("read", "Read Data")
user_actor = PolicyTerm.create_actor("user", "Standard User")
# Define relations (edges in policy graph)
engine.add_relation(PolicyRelation.permits(user_actor, read_action))
# Evaluate
result = engine.evaluate(actor="user", action="read")  # ✅ Permitted

Real-time policy evaluation and capability token issuance.
Features:
- ✅ Pre-execution gating for all tool calls
- ✅ Context-aware policy evaluation
- ✅ Reason traces for explainability
- ✅ Capability token minting (time-limited, scoped)
Flow:
Model Request → Decision Service → Policy Evaluation → Token Issuance → Ledger Recording
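As a rough sketch of that flow from a caller's perspective, the snippet below requests a decision over the REST API and only executes the tool if a capability token is issued. The endpoint and response fields mirror the Quick Start example further down; the run_tool helper and its context values are hypothetical:

import requests

LEXECON_URL = "http://localhost:8000"

def run_tool(tool_name: str, actor: str) -> None:
    # Hypothetical helper: gate a tool call behind a Lexecon decision
    # 1. Model request -> Decision Service
    response = requests.post(f"{LEXECON_URL}/decisions/request", json={
        "actor": actor,
        "action": f"tool:{tool_name}",
        "resource": tool_name,
        "context": {"environment": "production"},
    })
    decision = response.json()
    # 2. Policy evaluation: denied requests never reach the tool
    if decision["outcome"] != "allowed":
        raise PermissionError(f"Denied: {decision['reason']}")
    # 3. Token issuance: pass the capability token to the tool runtime
    token = decision.get("capability_token")
    print(f"Executing {tool_name} with capability {token}")
    # 4. Ledger recording happens server-side for both outcomes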
Short-lived authorization tokens for approved actions.
Features:
- ✅ Scoped permissions (single action or resource)
- ✅ Time-limited validity (configurable TTL)
- ✅ Policy version binding
- ✅ Cryptographic verification
Example:
token = capability_service.mint_token(
    action="database:read",
    scope={"table": "users"},
    ttl_seconds=300  # 5-minute validity
)
# Token: cap_a1b2c3d4_read_users_exp1704412800

Tamper-evident audit log using hash chaining.
Features:
- ✅ Hash-chained entries (like blockchain, but faster)
- ✅ Ed25519 signatures on all events
- ✅ Integrity verification tooling
- ✅ Audit report generation
Properties:
- Tamper-Evident: Any modification breaks the chain (see the sketch below)
- Auditable: Complete forensic trail
- Fast: 10,000+ entries/second
- Portable: Export to JSON/SQLite
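To make the tamper-evidence property concrete, here is a minimal, self-contained sketch of hash chaining using only the Python standard library; it illustrates the idea and does not reproduce Lexecon's actual entry format:

import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Hash the entry together with the previous entry's hash
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a tiny chain from a genesis hash
chain, prev = [], "0" * 64
for event in [{"action": "database:read"}, {"action": "database:delete", "outcome": "denied"}]:
    h = entry_hash(event, prev)
    chain.append({"entry": event, "prev_hash": prev, "hash": h})
    prev = h

def verify(chain: list) -> bool:
    # Recompute every hash; any edit to any entry breaks the chain
    prev = "0" * 64
    for link in chain:
        if link["prev_hash"] != prev or entry_hash(link["entry"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True

print(verify(chain))                            # True
chain[0]["entry"]["action"] = "database:write"
print(verify(chain))                            # False - tampering detected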
Immutable artifact storage for compliance evidence.
Features:
- ✅ Append-only storage (optional)
- ✅ SHA-256 content hashing
- ✅ Digital signatures (RSA-4096)
- ✅ Artifact types: decisions, attestations, compliance records
Use Cases:
- EU AI Act technical documentation
- Compliance audit trails
- Signed attestations from executives
- Risk assessments
Quantitative risk assessment and tracking.
Features:
- ✅ Risk scoring (likelihood × impact; see the sketch below)
- ✅ Mitigation tracking
- ✅ Escalation workflows
- ✅ Risk register management
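Scoring is likelihood multiplied by impact, each on a 0-1 scale. The sketch below shows the arithmetic with hypothetical level thresholds (Lexecon's actual banding may differ):

def risk_score(likelihood: float, impact: float) -> float:
    # Risk score = likelihood x impact, both in [0, 1]
    return likelihood * impact

def risk_level(score: float) -> str:
    # Hypothetical banding for illustration only
    if score >= 0.5:
        return "CRITICAL"
    if score >= 0.25:
        return "HIGH"
    if score >= 0.1:
        return "MEDIUM"
    return "LOW"

score = risk_score(likelihood=0.3, impact=0.9)
print(f"{score:.2f}", risk_level(score))   # 0.27 HIGH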
Human-in-the-loop oversight for high-risk decisions.
Features:
- ✅ Automatic escalation triggers
- ✅ Resolution workflows (approve/reject/defer)
- ✅ Escalation history tracking
- ✅ Notification integration (email, Slack, PagerDuty)
Executive override capabilities with full audit trail.
Features:
- ✅ Break-glass emergency procedures
- ✅ Executive approval workflows
- ✅ Override justification requirements
- ✅ Compliance reporting
Automatic mapping of governance primitives to regulatory controls.
Supported Frameworks:
- ✅ EU AI Act (Articles 9-17, 72)
- ✅ GDPR (Articles 5, 22, 25, 32, 35)
- ✅ SOC 2 (CC1-CC9, Trust Service Criteria)
- ✅ ISO 27001 (Controls A.5-A.18)
Example:
mapping = compliance_service.map_primitive_to_controls(
    primitive_type="DECISION_LOGGING",
    primitive_id="dec_12345",
    framework=RegulatoryFramework.EU_AI_ACT
)
# Returns: [Article 12.1, Article 12.2, Article 16.d, Article 72]

Specialized implementation of EU AI Act requirements.
Modules:
- ✅ Article 11: Technical documentation
- ✅ Article 12: Record-keeping (automatic logging)
- ✅ Article 14: Human oversight workflows
Enterprise security infrastructure.
Components:
- ✅ Authentication: RBAC with hierarchical permissions
- ✅ Digital Signatures: Ed25519 for audit packets, RSA-4096 for artifacts
- ✅ Audit Logging: Comprehensive security event tracking
- ✅ Middleware: FastAPI integration for request signing
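As a sketch of what request-signing middleware can look like in FastAPI (the header names, key registry, and verification flow here are assumptions for illustration, not Lexecon's actual middleware):

from typing import Dict
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

app = FastAPI()
PUBLIC_KEYS: Dict[str, Ed25519PublicKey] = {}  # hypothetical registry keyed by actor id

@app.middleware("http")
async def verify_request_signature(request: Request, call_next):
    actor = request.headers.get("x-lexecon-actor")          # assumed header name
    signature = request.headers.get("x-lexecon-signature")  # assumed header name
    body = await request.body()
    key = PUBLIC_KEYS.get(actor)
    if key is None or signature is None:
        return JSONResponse(status_code=401, content={"detail": "unsigned request"})
    try:
        key.verify(bytes.fromhex(signature), body)
    except (InvalidSignature, ValueError):
        return JSONResponse(status_code=401, content={"detail": "invalid signature"})
    return await call_next(request)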
Production-ready monitoring and telemetry.
Features:
- ✅ Structured JSON logging with context vars (see the sketch below)
- ✅ OpenTelemetry tracing integration
- ✅ Prometheus metrics export
- ✅ Health check endpoints
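A minimal sketch of structured JSON logging with a context variable, using only the standard library (Lexecon's own logger configuration and field names will differ):

import contextvars
import json
import logging

decision_id_var = contextvars.ContextVar("decision_id", default=None)

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # Emit one JSON object per log line, carrying the current decision id
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "decision_id": decision_id_var.get(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("lexecon.example")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

decision_id_var.set("dec_12345")
logger.info("decision recorded")   # one JSON object with the context var attached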
Compliance-ready audit report generation.
Features:
- ✅ Time-range filtering (see the sketch below)
- ✅ Event type filtering
- ✅ Multiple export formats (JSON, CSV, PDF)
- ✅ Cryptographic integrity proofs
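As a rough sketch of time-range filtering over exported ledger entries and writing them to CSV (the entry fields here are assumptions for illustration):

import csv
from datetime import datetime, timezone

def export_range(entries, start: datetime, end: datetime, path: str) -> None:
    # Keep only entries whose timestamp falls inside [start, end)
    selected = [e for e in entries if start <= e["timestamp"] < end]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["timestamp", "action", "outcome", "hash"])
        writer.writeheader()
        for e in selected:
            writer.writerow({
                "timestamp": e["timestamp"].isoformat(),
                "action": e["action"],
                "outcome": e["outcome"],
                "hash": e["hash"],
            })

q1_start = datetime(2024, 1, 1, tzinfo=timezone.utc)
q2_start = datetime(2024, 4, 1, tzinfo=timezone.utc)
# export_range(ledger_entries, q1_start, q2_start, "audit_2024_q1.csv")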
Chain of custody for AI decisions.
Features:
- ✅ Responsibility assignment per decision
- ✅ Delegation workflows
- ✅ Accountability reporting
- ✅ RACI matrix support
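To illustrate the RACI idea (Responsible, Accountable, Consulted, Informed) at the level of a single decision, a hypothetical assignment might look like this; the structure is illustrative, not Lexecon's data model:

raci = {
    "decision_id": "dec_12345",
    "responsible": "act_human_user:alice",   # carries out the action
    "accountable": "act_human_user:cto",     # ultimately answerable for it
    "consulted": ["security_team"],          # provide input before the decision
    "informed": ["compliance_team"],         # notified after the fact
}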
Flexible persistence with SQLite and PostgreSQL support.
Features:
- ✅ SQLite for development/testing (see the connection sketch below)
- ✅ PostgreSQL for production
- ✅ Migration support
- ✅ Backup and restore utilities
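With SQLAlchemy underneath, the two backends differ only by connection URL; a sketch (the actual Lexecon configuration keys and URL handling may differ):

from sqlalchemy import create_engine

# SQLite for local development and testing
dev_engine = create_engine("sqlite:///lexecon_dev.db")

# PostgreSQL for production (credentials come from the environment in practice)
prod_engine = create_engine("postgresql+psycopg2://lexecon:secret@db.internal:5432/lexecon")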
Comprehensive command-line interface.
Commands:
lexecon init # Initialize configuration
lexecon policy validate # Validate policy definitions
lexecon audit verify # Verify ledger integrity
lexecon export audit # Export audit reports
lexecon doctor # System diagnostics

Production FastAPI server with 30+ endpoints.
Endpoint Categories:
- /decisions - Decision requests and history
- /policies - Policy management
- /capabilities - Token operations
- /ledger - Audit trail queries
- /evidence - Artifact management
- /escalations - Human oversight
- /overrides - Executive actions
- /compliance - Regulatory reporting
Lexecon Protocol Stack (top to bottom):

- API Layer (FastAPI): REST endpoints, OpenAPI docs, request validation, rate limiting
- Governance Core
  - Policy Engine: graph evaluation, constraints
  - Decision Service: gating, reason traces
  - Capability System: token minting, verification
- Cryptographic Services
  - Ledger: hash chains, integrity
  - Identity: Ed25519 keys, key storage
  - Signatures: packet signing, verification
- Compliance & Risk
  - EU AI Act: Articles 11-14, documentation
  - Compliance Mapping: SOC 2, GDPR, ISO 27001
  - Risk Management: scoring, mitigation
- Oversight & Controls
  - Escalations: human review, workflows
  - Overrides: break-glass, justification
  - Responsibility: accountability, chain of custody
- Evidence & Audit
  - Evidence Store: artifacts, signatures
  - Audit Export: reports, time-range filtering
  - Verification Tools: integrity checks, hash validation
- Observability: structured logging, OpenTelemetry tracing, Prometheus metrics
- Storage Layer: SQLite (dev), PostgreSQL (prod), migrations, backups
- Python 3.8+
- pip or Poetry
# From PyPI (when published)
pip install lexecon
# From source
git clone https://github.com/Lexicoding-systems/Lexecon.git
cd Lexecon
pip install -e ".[dev]"
# Verify installation
lexecon --version
lexecon doctor

docker pull lexecon/lexecon:latest
docker run -p 8000:8000 lexecon/lexecon:latest

lexecon init
# Creates: ~/.lexecon/config.yaml, keys/, policies/

lexecon serve
# Server running at: http://localhost:8000
# API docs: http://localhost:8000/docs

import requests
response = requests.post("http://localhost:8000/decisions/request", json={
    "actor": "act_human_user:alice",
    "action": "database:read",
    "resource": "users_table",
    "context": {
        "environment": "production",
        "purpose": "analytics"
    }
})
decision = response.json()
print(f"Decision: {decision['outcome']}") # "allowed" or "denied"
print(f"Reason: {decision['reason']}")
print(f"Token: {decision.get('capability_token')}")lexecon audit verify
# โ
Ledger integrity verified
# โ
1,234 entries checked
# โ
Chain intact from genesis to headfrom lexecon.policy import PolicyEngine, PolicyTerm, PolicyRelation, RelationType
engine = PolicyEngine()
# Define actors
admin = PolicyTerm.create_actor("admin", "Administrator")
user = PolicyTerm.create_actor("user", "Standard User")
# Define actions
read = PolicyTerm.create_action("read", "Read data")
write = PolicyTerm.create_action("write", "Write data")
delete = PolicyTerm.create_action("delete", "Delete data")
# Define relations
engine.add_relation(PolicyRelation.permits(admin, read))
engine.add_relation(PolicyRelation.permits(admin, write))
engine.add_relation(PolicyRelation.permits(admin, delete))
engine.add_relation(PolicyRelation.permits(user, read))
engine.add_relation(PolicyRelation.forbids(user, delete))
# Evaluate
result = engine.evaluate(actor="user", action="delete")
print(result.outcome)  # "denied"

from lexecon.compliance_mapping import ComplianceMappingService, RegulatoryFramework
service = ComplianceMappingService()
# Map a decision to EU AI Act articles
mapping = service.map_primitive_to_controls(
    primitive_type="DECISION_LOGGING",
    primitive_id="dec_12345",
    framework=RegulatoryFramework.EU_AI_ACT
)
print(f"Mapped to {len(mapping.control_ids)} controls:")
for control_id in mapping.control_ids:
    print(f"  - {control_id}")
# Generate compliance report
report = service.generate_compliance_report(RegulatoryFramework.SOC2)
print(f"Compliance: {report.compliance_percentage:.1f}%")from lexecon.risk import RiskService, RiskLevel
risk_service = RiskService()
# Create risk assessment
risk = risk_service.create_risk(
    title="Unauthorized data access",
    description="User attempting to access PII without proper authorization",
    category="data_privacy",
    likelihood=0.3,
    impact=0.9,
    affected_systems=["user_database", "audit_log"]
)
print(f"Risk ID: {risk.risk_id}")
print(f"Risk Score: {risk.risk_score:.2f}")
print(f"Risk Level: {risk.risk_level}") # HIGH
# Add mitigation
risk_service.add_mitigation(
    risk_id=risk.risk_id,
    action="Implement additional RBAC checks",
    responsible_party="security_team"
)

from lexecon.evidence import EvidenceService, ArtifactType
evidence_service = EvidenceService()
# Store compliance evidence
artifact = evidence_service.store_artifact(
    artifact_type=ArtifactType.ATTESTATION,
    content="We certify that all AI decisions are logged and auditable",
    source="cto@company.com",
    metadata={
        "regulation": "EU AI Act Article 12",
        "period": "2024-Q1"
    }
)
# Sign artifact (RSA-4096)
signed = evidence_service.sign_artifact(
    artifact_id=artifact.artifact_id,
    signer_id="act_human_user:cto",
    signature="...",
    algorithm="RSA-SHA256"
)
print(f"Artifact ID: {artifact.artifact_id}")
print(f"SHA256 Hash: {artifact.sha256_hash}")pytest --cov=src/lexecon --cov-report=html
# 824 tests passing
# 69% coverage (targeting 80%+)

- ✅ observability/logging.py
- ✅ observability/metrics.py
- ✅ observability/health.py
- ✅ evidence/append_only_store.py
- ✅ compliance_mapping/service.py
- ✅ policy/terms.py
- ✅ ledger/chain.py
- ✅ identity/signing.py
- ✅ capability/tokens.py
| Metric | Status | Target |
|---|---|---|
| Test Coverage | 69% | 80%+ |
| Tests Passing | 824 | All |
| Type Coverage | 85% | 90%+ |
| Linting | ✅ Black + Ruff | Clean |
| Security Scan | ✅ CodeQL | No High |
- ✅ Policy engine with graph evaluation
- ✅ Decision service with capability tokens
- ✅ Cryptographic ledger with hash chaining
- ✅ Evidence management system
- ✅ Basic compliance mapping (EU AI Act, GDPR, SOC 2)
- ✅ Risk management and scoring
- ✅ Escalation workflows
- ✅ Override management
- ✅ Responsibility tracking
- ✅ Security services (RBAC, signing, audit)
- ✅ REST API (30+ endpoints)
- ✅ CLI tooling
- ✅ EU AI Act Articles 11, 12, 14
- ✅ Compliance mapping automation
- 🚧 Automated compliance reporting
- 🚧 Real-time compliance dashboards
- 🚧 Export to regulatory formats (ESEF, XBRL)
- 📋 PostgreSQL production backend
- 📋 Horizontal scaling support
- 📋 High-availability deployments
- 📋 Kubernetes operators
- 📋 Terraform modules
- 📋 Performance benchmarking (10K+ req/s)
- 📋 LangChain integration
- 📋 OpenAI function calling adapters
- 📋 Anthropic tool use integration
- 📋 Prompt injection detection
- 📋 Model behavior analysis
- 🔮 Federated governance (multi-org)
- 🔮 Zero-knowledge proofs for privacy
- 🔮 Blockchain anchoring (optional)
- 🔮 AI-generated policy suggestions
- 🔮 Automated red-teaming
- 🔮 Compliance prediction (ML-based)
- Policy Terms: Nodes in the policy graph (actors, actions, resources, data classes)
- Policy Relations: Edges defining permissions (permits, forbids, requires, implies)
- Governance Primitives: Core operations (decisions, escalations, overrides, evidence)
- Capability Tokens: Short-lived authorization tokens for approved actions
- Hash Chaining: Tamper-evident linking of audit entries
- Digital Signatures: Ed25519 for speed, RSA-4096 for compliance
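For example, with the cryptography package an Ed25519 sign-and-verify round trip looks roughly like this; it is a generic illustration, not Lexecon's packet format:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

packet = b'{"action": "database:read", "outcome": "allowed"}'
signature = private_key.sign(packet)       # 64-byte Ed25519 signature

try:
    public_key.verify(signature, packet)   # raises InvalidSignature if the packet was altered
    print("signature valid")
except InvalidSignature:
    print("tampering detected")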
Full API documentation is available at /docs while the server is running:
lexecon serve
# Visit: http://localhost:8000/docs

lexecon --help # Show all commands
lexecon policy --help # Policy management
lexecon audit --help # Audit operations
lexecon export --help # Export utilities

We welcome contributions! Please see CONTRIBUTING.md for guidelines.
# Clone repository
git clone https://github.com/Lexicoding-systems/Lexecon.git
cd Lexecon
# Install with development dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run linters
black src/ tests/
ruff check src/ tests/
# Run type checker
mypy src/

- Test coverage (target: 80%+)
- Documentation and examples
- Additional compliance frameworks
- Model integrations (LangChain, LlamaIndex)
- Performance optimizations
- Bug fixes and improvements
Please report security issues to Jacobporter@lexicoding.tech.
Do not open public issues for security vulnerabilities.
- ✅ Ed25519 cryptographic signatures (tamper-proof)
- ✅ Hash-chained audit logs (immutable)
- ✅ RBAC with hierarchical permissions
- ✅ Time-limited capability tokens
- ✅ Request signing middleware
- ✅ Audit log integrity verification
- ✅ Input validation and sanitization
Lexecon is released under the MIT License.
Copyright (c) 2024 Lexicoding Systems
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| Feature | Lexecon | Traditional Approaches |
|---|---|---|
| Audit Trail | Cryptographically tamper-proof | Mutable logs, easy to alter |
| Policy Enforcement | Runtime gating, deny-by-default | Post-hoc analysis, hope-based |
| Compliance | Automated mapping, real-time | Manual processes, expensive |
| Transparency | Every decision explained | Black-box decisions |
| Security | Ed25519 signatures, hash chains | Often none |
| Scalability | 10K+ req/s (target) | Varies |
- Documentation: https://lexecon.readthedocs.io (coming soon)
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: [Jacobporter@lexicoding.tech](mailto:Jacobporter@lexicoding.tech)
Built with:
- FastAPI - Modern web framework
- Pydantic - Data validation
- Cryptography - Ed25519 and RSA implementations
- SQLAlchemy - Database ORM
- pytest - Testing framework
Inspired by:
- EU AI Act requirements
- NIST AI Risk Management Framework
- OpenAI's safety practices
- Anthropic's Constitutional AI