Multi-LLM collaboration CLI that harvests unique insights from each model.
Conclave orchestrates structured conversations between AI models (Claude, GPT, Gemini, Grok) to produce better results than any single model alone. Each model brings its own perspective, critiques peers, and refines its thinking across multiple rounds.
Different AI models have different strengths, biases, and blind spots. Conclave leverages this diversity:
- Claude excels at nuanced reasoning and safety considerations
- GPT brings broad knowledge and creative problem-solving
- Gemini offers strong analytical and multimodal capabilities
- Grok provides real-time knowledge and unique perspectives
By having them collaborate, you get solutions that are more robust, creative, and thoroughly vetted.
| Feature | Description |
|---|---|
| Batch Flows | Structured multi-round collaboration with file output |
| Interactive Chat | Real-time multi-LLM conversations with shared context |
| @Mentions | Target specific models in chat |
| Multiple Flow Types | Democratic (basic) or hierarchical (leading) patterns |
Python (recommended):

```bash
cd python
python -m venv venv
source venv/bin/activate # or `venv\Scripts\activate` on Windows
pip install -e .
# Set up API keys
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GEMINI_API_KEY="..."
export XAI_API_KEY="..." # For Grok
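# Optional: verify each provider key is working
conclave doctor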
# Run your first collaboration
conclave run basic-ideator your-idea.md
# Start an interactive chat
conclave chat
```

Or, with the TypeScript implementation:

```bash
cd typescript
npm install
npm run build
npm link
conclave run basic-ideator your-idea.md
```

Have real-time discussions with multiple AI models, all sharing context.
Turn-based: You send a message → All models respond → You send another → All respond again...
```text
You: "What's the best approach for caching?"
↓
Claude: "Redis for distributed, LRU for single-node."
GPT: "Agree. Consider cache invalidation strategy."
Gemini: "Don't forget CDN caching for static assets."
↓
You: "@anthropic elaborate on Redis"
↓
Claude: "Redis offers pub/sub for invalidation..."
(Only Claude responds - @mention targeting)
```
Each model sees the full conversation history, so they can reference and build on each other's responses.
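A minimal sketch of that shared-context design (illustrative Python only; `ChatRoom` and the per-model callables are invented for this example and are not Conclave's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class ChatRoom:
    """Illustrative shared-context chat room, not Conclave's real internals."""
    models: dict                  # name -> callable(history) -> reply text
    history: list = field(default_factory=list)

    def send(self, user_message, mention=None):
        self.history.append({"role": "user", "content": user_message})
        # An @mention targets a single model; otherwise every model takes a turn.
        targets = [mention] if mention else list(self.models)
        for name in targets:
            reply = self.models[name](self.history)
            # Replies join the shared history, so each model can reference
            # and build on what its peers said earlier.
            self.history.append({"role": name, "content": reply})
```

Because every model reads the same `history` list, cross-references like "Agree. Consider cache invalidation strategy." fall out naturally.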
```bash
conclave chat                          # All active models
conclave chat -m anthropic -m openai # Specific models
conclave chat -s my_session.json       # Resume session
```

| Feature | Example |
|---|---|
| @mentions | `@anthropic what do you think?` - only that model responds |
| Expand responses | `/expand` - get detailed 500+ word answers |
| Save sessions | `/save my-brainstorm` - persist for later |
| Target models | `/ask openai` - single model response |
See the full Chat Tutorial for more.
```text
┌─────────────────────────────────────────────────────────────┐
│ ROUND 1 │
│ Input → [Claude] [GPT] [Gemini] [Grok] → Individual plans │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ ROUND 2 │
│ Each model sees all peer outputs, critiques, and refines │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ ROUND N │
│ Convergence: Models synthesize best ideas from all │
└─────────────────────────────────────────────────────────────┘
```
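The same loop in sketch form (illustrative Python; `ask(model, prompt)` is a hypothetical stand-in for a single provider call, not a real Conclave function):

```python
# `ask(model, prompt)` is a hypothetical helper that sends one prompt
# to one provider and returns its reply as text.
def run_basic_flow(models, task, max_rounds=3):
    """Illustrative democratic flow: draft independently, then refine on peer feedback."""
    # Round 1: every model drafts an answer without seeing the others.
    outputs = {name: ask(name, task) for name in models}

    # Rounds 2..N: each model reads all peer outputs, critiques, and refines.
    for _ in range(max_rounds - 1):
        refined = {}
        for name in models:
            peers = "\n\n".join(o for n, o in outputs.items() if n != name)
            refined[name] = ask(
                name,
                f"Task: {task}\n\nPeer outputs:\n{peers}\n\n"
                "Critique the peers, then give your refined answer.",
            )
        outputs = refined
    return outputs  # one converged answer per model, ready to write to files
```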
| Flow | Type | Description |
|---|---|---|
| `basic-ideator` | Democratic | Round-robin: all models brainstorm, then refine based on peer feedback |
| `leading-ideator` | Hub-spoke | One model leads and synthesizes contributions from others |
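For contrast, the hub-spoke pattern behind `leading-ideator`, sketched with the same hypothetical `ask` helper as above:

```python
def run_leading_flow(models, task, leader="anthropic"):
    """Illustrative hub-spoke flow: spokes contribute, the leader synthesizes."""
    spokes = [name for name in models if name != leader]
    contributions = {name: ask(name, task) for name in spokes}
    # Only the leader sees every contribution and writes the final answer.
    notes = "\n\n".join(f"{n}: {c}" for n, c in contributions.items())
    return ask(
        leader,
        f"Task: {task}\n\nContributions:\n{notes}\n\n"
        "Synthesize the strongest ideas into one answer.",
    )
```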
- Basic Flow Tutorial - Security audit use case
- Leading Flow Tutorial - Architecture design use case
- Chat Tutorial - Feature brainstorming use case
```bash
# Flows
conclave list # Show available flows
conclave run <flow> <file.md> # Run a flow on input
conclave run leading-ideator input.md --leader openai
# Chat
conclave chat # Start interactive chat
conclave chat -m anthropic -m openai # Chat with specific models
# Management
conclave doctor # Check provider connectivity
conclave models # Configure AI models
conclave new-flow # Create a custom flow
conclave auth-claude             # Manage Claude CLI auth
```

Conclave uses `conclave.config.yaml` in your project directory:
```yaml
active_providers:
  - anthropic
  - openai
  - gemini
  - grok

providers:
  anthropic:
    type: anthropic
    model: claude-opus-4-5-20251101
  openai:
    type: openai
    model: gpt-5.2
  gemini:
    type: gemini
    model: gemini-2.0-flash
  grok:
    type: grok
    model: grok-4

flows:
  my-custom-flow:
    flow_type: basic
    max_rounds: 3
    prompts:
      round_1: "Analyze this problem..."
      refinement: "Review peer feedback..."
```
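With that in place, the custom flow runs like any built-in one: `conclave run my-custom-flow input.md` (using the flow name defined above).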

| Document | Description |
|---|---|
| Getting Started | Installation and setup |
| Chat Tutorial | Interactive chat room guide |
| Basic Flow Tutorial | Round-robin flow guide |
| Leading Flow Tutorial | Hub-and-spoke flow guide |
| Chat Transcript Example | Full example session |
```text
.
├── python/ # Python implementation (recommended)
├── typescript/ # TypeScript/Node.js implementation
├── docs/ # Documentation and tutorials
└── README.md       # This file
```
Set environment variables for each provider you want to use.
If you have a Claude Pro/Max subscription, Conclave can use the Claude CLI instead of API keys:
```bash
npm i -g @anthropic-ai/claude-code
claude # Authenticate once
conclave auth-claude    # Verify status
```

| Use Case | Recommended Feature |
|---|---|
| Open-ended brainstorming | `conclave chat` |
| Architecture design | Leading flow with `--leader anthropic` |
| Security audit | Basic flow (cross-validation) |
| Code review | Chat or Basic flow |
| Technical writing | Leading flow |
| Problem solving | Chat for exploration, flow for depth |
MIT