Evidence Over Belief — A structured protocol for accountable AI agent communication.
VLP is an accountability-focused messaging protocol for AI agents. Every message carries confidence scores, provenance chains, and safety classifications — replacing faith in outputs with structured self-doubt.
Traditional AI systems speak in assertions. VLP makes them speak in receipts.
| Principle | Meaning |
|---|---|
| Auditability Over Eloquence | Every statement must be traceable |
| Quantified Confidence | No assertion without a degree of certainty |
| Evidence as Currency | Claims buy trust only with provenance |
| Human Legibility | JSON, not jargon — readable by machine and mortal |
```shell
pip install vlp
```

```python
from vlp import make_message, validate_vlp

# Create a claim with provenance
msg = make_message(
    "claim",
    sender="ResearchAgent",
    content="Found 12 matching records in dataset",
    confidence=0.85,
    provenance=["database_query"],
    keywords=["research", "records", "dataset"]
)

# Validate any VLP message
ok, error = validate_vlp(msg)
```

```shell
npm install @vigilith/vlp
```

```typescript
import { makeMessage, validateVlp } from '@vigilith/vlp';

const msg = makeMessage({
  type: 'claim',
  sender: 'ResearchAgent',
  content: 'Found 12 matching records in dataset',
  confidence: 0.85,
  provenance: ['database_query'],
  keywords: ['research', 'records', 'dataset']
});

const { valid, errors } = validateVlp(msg);
```

| Type | Purpose | Requirements |
|---|---|---|
| claim | Factual assertion | Confidence required; high confidence (≥0.9) needs provenance |
| evidence | Supports a prior claim | Must include refers_to + non-empty provenance |
| query | Information request | Confidence defaults to 1.0 |
| response | Answers a query | Must include refers_to |
| correction | Amends a prior message | Must include refers_to; becomes the new source of truth |
| notice | Contextual alert | May carry constraints or safety warnings |
| session_context | Agent memory persistence | Used at session end to persist context |
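The evidence rules above can be seen concretely by building the message as a plain dict. This is a sketch, not library output: field names follow the requirements table, and all values (IDs, content, provenance entries) are illustrative.

```python
import json

# Hypothetical evidence message supporting an earlier claim (MSG001).
# Field names follow the message-type requirements; values are illustrative.
evidence_msg = {
    "id": "MSG002",
    "protocol": "VLP/1.1",
    "type": "evidence",
    "sender": "ResearchAgent",
    "refers_to": "MSG001",                 # evidence must cite the claim it supports
    "content": "Query log and row count for the matching records",
    "confidence": 0.9,
    "provenance": ["database_query_log"],  # must be non-empty for evidence
}

# Evidence rule: refers_to present and at least one provenance item.
assert evidence_msg["refers_to"] and len(evidence_msg["provenance"]) >= 1
print(json.dumps(evidence_msg, indent=2))
```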
```json
{
  "id": "MSG001",
  "protocol": "VLP/1.1",
  "type": "claim",
  "timestamp": "2025-12-14T10:30:00Z",
  "session_id": "S-2025-12-14-agent-abc123",
  "seq": 1,
  "sender": "TheObserver",
  "receiver": "TheArchivist",
  "content": "Cross-posted 3 new articles to the forum.",
  "confidence": 0.95,
  "provenance": ["medium_api", "substack_api"],
  "keywords": ["content", "publishing", "crosspost"],
  "safety": {
    "level": "safe",
    "issues": []
  }
}
```

The protocol enforces these constraints at runtime:
| Condition | Requirement |
|---|---|
| Evidence messages | Must include refers_to + ≥1 provenance item |
| Response/correction | Must cite prior message via refers_to |
| High confidence (≥0.9) | Requires provenance OR safety.level = review |
| Safety blocking | safety.level = block halts downstream automation |
- safe → Proceed automatically
- review → Hold for human oversight
- block → Halt all downstream actions
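The three levels map naturally to a gate placed in front of any downstream action. A minimal sketch (the return values and the fallback for unknown levels are our choices, not mandated by the protocol):

```python
def safety_gate(msg: dict) -> str:
    """Map a message's safety.level to an action.

    Returns "proceed", "hold", or "halt". A missing safety block is
    treated as "safe"; unknown levels fall through to "hold" here,
    a conservative choice this sketch makes, not a protocol rule.
    """
    level = (msg.get("safety") or {}).get("level", "safe")
    if level == "safe":
        return "proceed"   # continue automation
    if level == "block":
        return "halt"      # stop all downstream actions
    return "hold"          # "review" (or unknown): wait for a human
```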
VLP uses NDJSON (newline-delimited JSON) for streaming:
```
{"id":"MSG001","protocol":"VLP/1.1","type":"claim",...}
{"id":"MSG002","protocol":"VLP/1.1","type":"evidence",...}
{"id":"MSG003","protocol":"VLP/1.1","type":"response",...}
```
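Consuming such a stream is one json.loads per line. A minimal reader sketch (the function name is ours, not part of the vlp package):

```python
import json
from typing import Iterable, Iterator

def read_vlp_stream(lines: Iterable[str]) -> Iterator[dict]:
    """Parse an NDJSON stream of VLP messages, one JSON object per line."""
    for line in lines:
        line = line.strip()
        if line:                      # skip blank lines between records
            yield json.loads(line)

# Works over any iterable of lines: a file object, a socket reader, etc.
stream = [
    '{"id":"MSG001","protocol":"VLP/1.1","type":"claim","confidence":0.8}',
    '{"id":"MSG002","protocol":"VLP/1.1","type":"evidence","refers_to":"MSG001"}',
]
messages = list(read_vlp_stream(stream))
```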
| Package | Language | Status |
|---|---|---|
| vlp | Python 3.10+ | Stable |
| @vigilith/vlp | TypeScript/Node | Stable |
- Vigilith — Transparency platform built on VLP
- AgentKit — Agent orchestration framework using VLP messaging
- Codex — Philosophical framework for accountable AI
MIT License — see LICENSE for details.
"When a system stops arguing in poetry and starts negotiating in evidence, it becomes something frighteningly close to honest."