Quorum is a multi-AI deliberation framework. It orchestrates multiple LLM providers through a structured debate process to produce higher-quality, more reliable answers than any single model.
The deliberation engine (council-v2.ts) runs the following phases:

1. **Gather**: Each provider generates an independent response in isolation. No provider sees another's output, ensuring diverse initial perspectives.
2. **Plan**: Each provider now sees all other providers' initial responses and plans its argument strategy. This is where providers identify points of agreement and disagreement.
3. **Formulate**: Each provider writes a formal position statement based on its plan.
4. **Debate**: Room-style debate in which every provider critiques ALL other positions simultaneously (not round-robin pairs). This creates swarm dynamics where weak arguments are challenged from multiple angles.
5. **Adjust**: Each provider revises its position based on all critiques received. Providers can strengthen, modify, or abandon positions.
6. **Rebuttal**: Final rebuttals or concessions. Auto-skipped if consensus has already been reached (measured by disagreement entropy).
7. **Vote**: Each provider ranks all positions. Votes are tallied using the configured method:
   - Borda count (default): points by rank position
   - Ranked-choice: instant-runoff elimination
   - Approval: binary approve/reject
   - Condorcet: pairwise-comparison winner
8. **Synthesize**: The runner-up (not the winner, to reduce confirmation bias) synthesizes the best thinking into a definitive answer, including:
   - The main answer with merged insights
   - A minority report (dissenting views)
   - A "What Would Change My Mind" section
   - Per-provider contribution attribution
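The default Borda tally can be sketched in a few lines. This is an illustrative reimplementation, not Quorum's actual voting code; the `bordaTally` name and the ballot shape (arrays of provider names, best first) are assumptions.

```typescript
// Borda count: with n candidates, first place earns n-1 points,
// second place n-2, ..., last place 0. Highest total wins.
function bordaTally(ballots: string[][]): string {
  const scores = new Map<string, number>();
  for (const ballot of ballots) {
    const n = ballot.length;
    ballot.forEach((candidate, rank) => {
      scores.set(candidate, (scores.get(candidate) ?? 0) + (n - 1 - rank));
    });
  }
  let winner = "";
  let best = -1;
  for (const [candidate, score] of scores) {
    if (score > best) {
      best = score;
      winner = candidate;
    }
  }
  return winner;
}
```

A tie-breaking rule would be needed in practice; this sketch simply keeps the first candidate reaching the top score.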
Profiles can specify custom phase pipelines:

```yaml
# Rapid mode (3 phases)
phases: [gather, debate, synthesize]

# Skip planning
phases: [gather, formulate, debate, adjust, vote, synthesize]
```

All providers route through pi-ai for unified API access. The ProviderAdapter interface requires only:
```typescript
interface ProviderAdapter {
  name: string;
  generate(prompt: string, systemPrompt?: string): Promise<string>;
  generateStream?(
    prompt: string,
    systemPrompt: string | undefined,
    onDelta: (delta: string) => void
  ): Promise<string>;
}
```

This makes adding new providers trivial: any OpenAI-compatible API works out of the box.
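As an illustration, a hypothetical adapter for an OpenAI-compatible endpoint might look like the sketch below. The URL, model name, adapter name, and environment variable are all placeholders, and error handling is omitted.

```typescript
interface ProviderAdapter {
  name: string;
  generate(prompt: string, systemPrompt?: string): Promise<string>;
}

// Hypothetical adapter for any OpenAI-compatible chat-completions API.
class OpenAICompatAdapter implements ProviderAdapter {
  name = "my-provider"; // placeholder name

  async generate(prompt: string, systemPrompt?: string): Promise<string> {
    const messages = [
      ...(systemPrompt ? [{ role: "system", content: systemPrompt }] : []),
      { role: "user", content: prompt },
    ];
    // Placeholder endpoint and key; substitute your provider's values.
    const res = await fetch("https://api.example.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.MY_PROVIDER_KEY}`,
      },
      body: JSON.stringify({ model: "my-model", messages }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }
}
```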
The adaptive system (adaptive.ts) dynamically adjusts deliberation based on disagreement entropy:
- Low entropy after gather → providers already agree → skip to vote
- High entropy after debate → add extra debate rounds
- Uses multi-armed bandit learning to optimize skip/extend decisions over time
Presets: fast (aggressive skipping), balanced (default), critical (never skips).
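Disagreement entropy itself can be computed with the standard Shannon formula. The sketch below is a plausible reading, not the actual adaptive.ts logic: each provider's current stance is treated as a label, and the entropy of the label distribution is normalized to [0, 1].

```typescript
// Illustrative sketch: normalized Shannon entropy over provider stances.
// 0 means full agreement; 1 means maximal disagreement.
function disagreementEntropy(stances: string[]): number {
  const counts = new Map<string, number>();
  for (const s of stances) counts.set(s, (counts.get(s) ?? 0) + 1);
  if (counts.size <= 1) return 0; // everyone agrees
  let h = 0;
  for (const c of counts.values()) {
    const p = c / stances.length;
    h -= p * Math.log2(p);
  }
  // Normalize by the maximum entropy for this many distinct stances.
  return h / Math.log2(counts.size);
}
```

A fast preset would then skip ahead whenever this value falls below some threshold after the gather phase, while a critical preset ignores it.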
The topology engine (topology.ts) supports 7 debate structures:
| Topology | Description |
|---|---|
| Mesh | Every provider debates every other (default) |
| Star | Hub provider debates all others; spokes only see hub |
| Tournament | Bracket-style elimination |
| Map-Reduce | Split question into sub-questions, merge results |
| Adversarial Tree | Binary tree of challenger pairs |
| Pipeline | Sequential: each builds on the previous |
| Panel | Moderator-guided discussion |
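To make the difference concrete, here is a sketch of how a topology could map to concrete critique pairs for two of the structures. The function and types are illustrative, not topology.ts's API.

```typescript
type Pair = [critic: string, target: string];

// Mesh: every ordered pair of distinct providers debates.
// Star: only hub-to-spoke and spoke-to-hub pairs; spokes never see each other.
function debatePairs(
  topology: "mesh" | "star",
  providers: string[],
  hub = providers[0]
): Pair[] {
  const pairs: Pair[] = [];
  if (topology === "mesh") {
    for (const a of providers)
      for (const b of providers)
        if (a !== b) pairs.push([a, b]);
  } else {
    for (const p of providers)
      if (p !== hub) pairs.push([hub, p], [p, hub]);
  }
  return pairs;
}
```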
When enabled (--evidence strict), providers must tag claims with sources:
- Claims are parsed and scored by quality tier: A (URL) → B (file) → C (data) → D (reasoning) → F (unsupported)
- Cross-provider validation detects corroborated and contradicted claims
- In strict mode, unsupported claims are penalized in voting
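One plausible shape for the tier-to-penalty mapping is sketched below; the weights and names are invented for illustration and are not Quorum's actual values.

```typescript
type Tier = "A" | "B" | "C" | "D" | "F";

// Assumed tier weights for evidence-quality scoring.
const TIER_WEIGHT: Record<Tier, number> = {
  A: 1.0, // URL-backed claim
  B: 0.8, // file-backed claim
  C: 0.6, // data-backed claim
  D: 0.4, // reasoning only
  F: 0.0, // unsupported
};

// Average evidence quality across a provider's claims; in strict mode a
// low score could translate into a voting penalty.
function evidenceScore(tiers: Tier[]): number {
  if (tiers.length === 0) return 0;
  return tiers.reduce((sum, t) => sum + TIER_WEIGHT[t], 0) / tiers.length;
}
```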
Sessions are stored as JSON files in ~/.quorum/sessions/. Each phase's output is saved incrementally via SessionStore, enabling:
- Resume after interruption
- Post-hoc analysis
- Deterministic replay
Every deliberation is recorded in a SHA-256 hash-chained ledger (~/.quorum/ledger.json). Each entry chains to the previous via cryptographic hash, creating a tamper-evident audit trail.
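The hash-chaining scheme is a standard construction; a minimal sketch follows. The entry fields are assumptions, not Quorum's actual ledger schema.

```typescript
import { createHash } from "node:crypto";

interface LedgerEntry {
  question: string;
  answerDigest: string;
  prevHash: string;
  hash: string;
}

// Each new entry's hash covers the previous entry's hash, so editing any
// earlier entry invalidates every later hash.
function appendEntry(ledger: LedgerEntry[], question: string, answerDigest: string): LedgerEntry {
  const prevHash = ledger.length ? ledger[ledger.length - 1].hash : "0".repeat(64);
  const hash = createHash("sha256")
    .update(prevHash + question + answerDigest)
    .digest("hex");
  const entry = { question, answerDigest, prevHash, hash };
  ledger.push(entry);
  return entry;
}

// Verification walks the chain and recomputes each hash.
function verifyChain(ledger: LedgerEntry[]): boolean {
  let prev = "0".repeat(64);
  for (const e of ledger) {
    const expected = createHash("sha256")
      .update(prev + e.question + e.answerDigest)
      .digest("hex");
    if (e.prevHash !== prev || e.hash !== expected) return false;
    prev = e.hash;
  }
  return true;
}
```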
The memory graph (memory-graph.ts) enables cross-run retrieval:
- Stores key insights from each deliberation
- Keyword-based retrieval for relevant prior context
- Contradiction detection across sessions
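Keyword-based retrieval can be as simple as scoring the overlap between query terms and stored keywords. A naive sketch, where the `Insight` shape is an assumption:

```typescript
interface Insight {
  text: string;
  keywords: string[];
}

// Return the top-k insights whose keywords overlap the query terms.
function retrieve(memory: Insight[], query: string, k = 3): Insight[] {
  const qWords = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return memory
    .map((m) => ({ m, score: m.keywords.filter((w) => qWords.has(w)).length }))
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.m);
}
```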
YAML-based policy rules (policy.ts) evaluate pre- and post-deliberation:
- block: prevent deliberation from proceeding
- warn: show a warning but continue
- log: silent logging
- pause: require human confirmation
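Evaluation plausibly reduces to picking the most severe matching action. The sketch below assumes a rule shape (regex pattern plus action) that is illustrative, not policy.ts's actual schema:

```typescript
type Action = "block" | "pause" | "warn" | "log";

interface Rule {
  pattern: RegExp;
  action: Action;
}

// Return the most severe action among matching rules, or null if none match.
function evaluate(rules: Rule[], text: string): Action | null {
  const severity: Action[] = ["block", "pause", "warn", "log"];
  const matched = rules.filter((r) => r.pattern.test(text)).map((r) => r.action);
  for (const a of severity) if (matched.includes(a)) return a;
  return null;
}
```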
HITL checkpoints (hitl.ts) pause deliberation at configurable phases, and auto-trigger on high-controversy scenarios. During a pause, a human can:
- Inject guidance or additional context
- Override vote results
- Resume with modifications
The MCP server (mcp.ts) exposes Quorum as tools for AI agents via stdio-based Model Context Protocol, enabling Claude Desktop, Cursor, and other MCP clients to invoke deliberations.
```
User Input
    ↓
CLI (commander.js): parse flags, load config/profile
    ↓
Council V2 Engine
    ↓
┌───────────────────────────────────────────┐
│ For each phase:                           │
│   1. Build prompts (with context budget)  │
│   2. Run providers in parallel            │
│   3. Collect & store responses            │
│   4. Run adaptive check (skip/extend?)    │
│   5. Run policy check (block/warn?)       │
│   6. HITL checkpoint (pause?)             │
│   7. Run hooks (pre/post scripts)         │
└───────────────────────────────────────────┘
    ↓
Voting → Synthesis → Final Output
    ↓
Session saved → Ledger entry → Memory stored
```
The context manager (context.ts) handles token budgets:
- Estimates token counts for prompts
- Fits debate history within model context windows
- Prioritizes recent and high-relevance content when truncating
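A minimal sketch of budget fitting, assuming the common rough heuristic of about four characters per token; function names are illustrative, and the relevance weighting mentioned above is omitted here for brevity (this version drops oldest entries first):

```typescript
// Rough token estimate: ~4 characters per token (assumed heuristic).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep the most recent history entries that fit within the token budget.
function fitToBudget(history: string[], budget: number): string[] {
  const kept: string[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const t = estimateTokens(history[i]);
    if (used + t > budget) break;
    kept.unshift(history[i]);
    used += t;
  }
  return kept;
}
```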