Get LLM Council Plus running in under 5 minutes.
- Python 3.10+
- Node.js 18+
- uv - Install with:
curl -LsSf https://astral.sh/uv/install.sh | sh
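Before cloning, you can sanity-check the prerequisites from a shell. A minimal sketch (the version flags below are the standard ones for each tool):

```shell
# Check that the required tools are on PATH and report their versions.
for cmd in python3 node uv; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found: $cmd ($("$cmd" --version 2>&1 | head -n 1))"
  else
    echo "missing: $cmd"
  fi
done
```

If `node` reports a version below 18 or `python3` below 3.10, upgrade before continuing.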
# Clone the repo
git clone https://github.com/jacob-bd/llm-council-plus.git
cd llm-council-plus
# Install dependencies
uv sync
cd frontend && npm install && cd ..
# Start the app
./start.sh
Open http://localhost:5173 in your browser.
The Settings panel opens automatically on first launch.
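Once `./start.sh` is running, a quick way to confirm both servers answer (a sketch assuming the default ports: 8001 for the backend, 5173 for the frontend):

```shell
# Probe the default backend (8001) and frontend (5173) ports.
for url in http://localhost:8001 http://localhost:5173; do
  if curl -sf -o /dev/null --max-time 2 "$url" 2>/dev/null; then
    echo "up: $url"
  else
    echo "not responding: $url"
  fi
done
```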
- Get a free API key at openrouter.ai/keys
- Paste it in LLM API Keys → OpenRouter
- Click Test (auto-saves on success)
- Go to Council Config → Select models for your council
- Click Save Changes
- Install Ollama
- Pull a model:
ollama pull llama3.1
- Start Ollama:
ollama serve
- In Settings → LLM API Keys → Click Connect for Ollama
- Go to Council Config → Enable "Local (Ollama)" → Select models
- Click Save Changes
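The Ollama steps above can be scripted; `ollama list` confirms which models are available locally. A sketch that assumes Ollama is installed and skips gracefully if not:

```shell
# Pull a model and list what's available locally, if Ollama is present.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3.1
  ollama list
else
  echo "ollama not found; install it first"
fi
```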
- Get API keys from your preferred providers (OpenAI, Anthropic, Google, etc.)
- Enter keys in LLM API Keys → Direct LLM Connections
- Click Test for each (auto-saves on success)
- Go to Council Config → Enable "Direct Connections" → Select models
- Click Save Changes
- Click + in the sidebar to start a new conversation
- Type your question
- (Optional) Toggle Web Search for real-time info
- Press Enter
Watch as:
- Stage 1: Each council member responds independently
- Stage 2: Models anonymously rank each other's responses
- Stage 3: Chairman synthesizes the final answer
Choose your deliberation depth (toggle in chat header):
| Mode | What Happens |
|---|---|
| Chat Only | Just Stage 1 - quick individual responses |
| Chat + Ranking | Stages 1 & 2 - see peer rankings |
| Full Deliberation | All 3 stages - complete synthesis (default) |
- Mix model families for diverse perspectives (e.g., GPT + Claude + Gemini)
- Use Groq for speed - ultra-fast inference
- Use Ollama for unlimited local queries
- "I'm Feeling Lucky" button randomizes your council
- Abort anytime with the stop button in sidebar
| Problem | Solution |
|---|---|
| Models not appearing | Check provider is enabled in Council Config |
| Rate limit errors | Use Groq (14k/day) or Ollama (unlimited) |
| Port conflict | Backend uses 8001, frontend uses 5173 |
| node_modules errors | rm -rf frontend/node_modules && cd frontend && npm install |
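For the port-conflict row, this sketch reports which process (if any) holds each default port. It assumes `lsof` is available; on Linux, `ss -ltnp` gives similar information:

```shell
# Report whether the app's default ports are in use and by which PID.
for port in 8001 5173; do
  pid=$(lsof -ti tcp:"$port" 2>/dev/null)
  if [ -n "$pid" ]; then
    echo "port $port in use by PID $pid"
  else
    echo "port $port free"
  fi
done
```

Kill the reported PID (or stop the other service) before rerunning `./start.sh`.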
- Explore System Prompts to customize model behavior
- Configure Web Search providers (Tavily, Brave) for better results
- Adjust Temperature sliders for creativity control
- Export your council config to share or backup
For full documentation, see README.md.
Ask the council. Get better answers.