A democratic AI system in which multiple LLMs work together like a parliament, reaching consensus on questions and topics through discussion and voting.
- Features
- Quick Start
- Installation
- Configuration
- Usage
- How It Works
- Model Providers
- Project Structure
- Examples
- Troubleshooting
- FAQ
- Contributing
- Multiple Models: Supports Ollama (local), Groq, Hugging Face, Together AI, Cohere, and Google Gemini
- Anonymized Voting: Models vote on anonymized opinions to prevent bias
- Democratic Process: Each model votes independently; the majority wins
- Discussion Rounds: Models discuss and refine opinions before voting (2 rounds by default)
- Free APIs: Uses the free tiers of various AI services
- Modern GUI: Desktop application with real-time progress tracking
- Detailed Results: View all opinions, discussions, and voting breakdowns
- Export Results: Save deliberation results to JSON for analysis
For users who want to get started immediately:

1. Install dependencies:

```bash
pip install -r requirements.txt
```

2. Set up API keys (at least one):

   - Copy `env.example` to `.env`
   - Add at least one API key (see API Keys Setup)

3. Run the GUI:

```bash
python main_gui.py
```

Or double-click `run_council.bat` on Windows.

That's it! The system will automatically detect available models and start working.
- Python 3.8+ (Python 3.9 or higher recommended)
- pip (Python package manager)
- Internet connection (for API-based models)
- API keys (for at least one provider - see below)
If you have the project files, navigate to the project directory and install the dependencies:

```bash
cd Krabby
pip install -r requirements.txt
```

Note: On some systems, you may need to use `pip3` instead of `pip`.

Check that all packages installed correctly:

```bash
python -c "import ollama, groq, cohere; print('✅ All dependencies installed')"
```

The system supports multiple AI providers. You need at least one API key to get started.
Copy the example environment file:

```bash
# On Windows (PowerShell)
Copy-Item env.example .env

# On Linux/Mac
cp env.example .env
```

Choose one or more providers and get free API keys:
| Provider | Free Tier | Get API Key |
|---|---|---|
| Groq | ✅ Yes (Very Fast) | Get Key |
| Hugging Face | ✅ Yes | Get Key |
| Together AI | ✅ Yes | Get Key |
| Cohere | ✅ Yes | Get Key |
| Google Gemini | ✅ Yes | Get Key |
Open `.env` in a text editor and add your keys:

```bash
GROQ_API_KEY=your_groq_api_key_here
HUGGINGFACE_API_KEY=your_huggingface_api_key_here
TOGETHER_API_KEY=your_together_api_key_here
COHERE_API_KEY=your_cohere_api_key_here
GOOGLE_API_KEY=your_google_api_key_here
```

Important:

- Replace `your_*_api_key_here` with your actual API keys
- Don't share your `.env` file or commit it to version control
- You only need one API key to get started, but more models = better results
Ollama allows you to run models locally without API keys. This is completely optional but recommended for privacy and offline use.
1. Download Ollama:
   - Visit: https://ollama.ai
   - Download for your operating system
   - Install the application

2. Verify Installation:

```bash
ollama --version
```

3. Start Ollama Service:

```bash
# On Windows: usually starts automatically
# On Linux/Mac: run in a terminal
ollama serve
```

4. Download Models:

```bash
ollama pull llama3.2
ollama pull mistral
ollama pull phi3
```

5. Verify Models:

```bash
ollama list
```

Note: Ollama models work offline and don't require API keys, but they need sufficient RAM (4 GB+ recommended per model).
You can customize the council's behavior using environment variables in your `.env` file:

```bash
# Discussion rounds (default: 2)
COUNCIL_DISCUSSION_ROUNDS=2

# Voting mode (default: majority)
COUNCIL_VOTING_MODE=majority

# Ollama base URL (default: http://localhost:11434)
OLLAMA_BASE_URL=http://localhost:11434

# Model timeout in seconds (default: 60)
MODEL_TIMEOUT=60

# Maximum retry attempts (default: 3)
MAX_RETRIES=3
```

The GUI provides a modern, user-friendly interface with real-time progress tracking.
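The configuration variables above can be read at startup with `os.getenv`. Here is a minimal sketch, assuming the `.env` key names and documented defaults from this section; the project's actual `config.py` may structure this differently:

```python
import os

# Fallbacks are the documented defaults, so a missing variable degrades cleanly.
DISCUSSION_ROUNDS = int(os.getenv("COUNCIL_DISCUSSION_ROUNDS", "2"))
VOTING_MODE = os.getenv("COUNCIL_VOTING_MODE", "majority")
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
MODEL_TIMEOUT = int(os.getenv("MODEL_TIMEOUT", "60"))  # seconds per model call
MAX_RETRIES = int(os.getenv("MAX_RETRIES", "3"))       # attempts before giving up on a model
```

Note that numeric values arrive as strings from the environment, hence the explicit `int()` conversions.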
```bash
python main_gui.py
```

On Windows, you can simply double-click `run_council.bat`. On Linux/Mac, you can use the shell script instead:

```bash
bash run_council.sh
```

- Input Panel: Enter your question or topic
- Status Panel: See which models are available
- Results Tabs:
  - Final Output: The winning opinion
  - All Opinions: See what each model initially thought
  - Voting Results: Detailed vote breakdown
- Export: Save results to JSON
- Clear: Reset and start over
For users who prefer the terminal or want to automate the process:
```bash
python main.py
```

The CLI will:
- Check Ollama connection
- Check API keys
- Show available models
- Prompt for your question
- Display results
- Optionally save results to JSON
Example Session:

```text
Enter your question or topic for the council:
What is the best approach to learn machine learning?

Step 1: Gathering initial opinions from all models...
Step 2.1: Discussion round 1...
Step 2.2: Discussion round 2...
Step 3: Models are voting...
Step 4: Counting votes...

FINAL OUTPUT (WINNING OPINION):
[The winning opinion will be displayed here]
```
The Council system uses a democratic process to reach consensus:
- Input Phase: All models receive the same question/topic
- Initial Opinions: Each model independently generates its own opinion
- Anonymization: Opinions are assigned random IDs and shuffled to prevent bias
- Discussion Rounds: Models review all anonymous opinions and discuss (default: 2 rounds)
- Voting Phase: Each model votes for the best opinion (without knowing who wrote it)
- Output: The opinion with the most votes wins and becomes the final output
Why Anonymization? By hiding which model wrote which opinion, we prevent models from voting based on reputation or bias. They must evaluate opinions purely on merit.
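The anonymize-then-vote steps can be sketched in a few lines of Python. `anonymize` and `tally` are illustrative names, not the project's actual functions in `anonymizer.py` and `voting.py`:

```python
import random
from collections import Counter

def anonymize(opinions):
    """Assign each opinion a random ID so voters never see authorship."""
    ids = [f"opinion_{i}" for i in range(len(opinions))]
    random.shuffle(ids)  # shuffled so IDs carry no ordering clue
    authors = dict(zip(ids, opinions))                    # private: ID -> model name
    anonymized = {i: opinions[authors[i]] for i in ids}   # public: ID -> opinion text
    return anonymized, authors

def tally(votes):
    """Majority mode: the opinion ID with the most votes wins."""
    winner_id, count = Counter(votes).most_common(1)[0]
    return winner_id, count

opinions = {
    "groq": "Learn by building small projects.",
    "ollama": "Start with the underlying math.",
    "cohere": "Mix a course with daily practice.",
}
anonymized, authors = anonymize(opinions)
winner_id, count = tally(["opinion_0", "opinion_0", "opinion_2"])  # one ballot per model
```

Ties are a real design question in majority voting; this sketch simply takes `Counter`'s first maximum.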
| Provider | Type | API Key Required | Speed | Best For |
|---|---|---|---|---|
| Ollama | Local | ❌ No | Medium | Privacy, offline use |
| Groq | Cloud | ✅ Yes | ⚡ Very Fast | Quick responses |
| Hugging Face | Cloud | ✅ Yes | Medium | Variety of models |
| Together AI | Cloud | ✅ Yes | Fast | High-quality models |
| Cohere | Cloud | ✅ Yes | Fast | Business applications |
| Google Gemini | Cloud | ✅ Yes | Fast | Google ecosystem |
Recommendation: Start with Groq (fastest) or Ollama (no API key needed).
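Behind the table above, supporting several providers usually comes down to one thin wrapper per backend, all exposing the same interface. A minimal sketch with offline stand-in backends (class and function names are illustrative, not the project's actual `models.py`):

```python
class ModelWrapper:
    """Uniform interface: every provider exposes ask(prompt) -> str."""
    def __init__(self, name: str):
        self.name = name

    def ask(self, prompt: str) -> str:
        raise NotImplementedError  # real subclasses call Groq, Ollama, Cohere, ...

class EchoModel(ModelWrapper):
    """Offline stand-in; a real wrapper would call the provider's chat API here."""
    def ask(self, prompt: str) -> str:
        return f"[{self.name}] opinion on: {prompt}"

class BrokenModel(ModelWrapper):
    """Simulates a provider that is down or rate-limited."""
    def ask(self, prompt: str) -> str:
        raise ConnectionError("provider unavailable")

def gather_opinions(models, question):
    """Collect one opinion per model, skipping failures so the council keeps going."""
    opinions = {}
    for model in models:
        try:
            opinions[model.name] = model.ask(question)
        except Exception:
            continue  # a dead provider must not halt the deliberation
    return opinions

models = [EchoModel("groq"), BrokenModel("huggingface"), EchoModel("ollama")]
opinions = gather_opinions(models, "Python or JavaScript for a new web project?")
```

This skip-on-failure behavior is what lets the council run with whatever subset of providers happens to be configured.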
```text
Krabby/
├── council/               # Main council package
│   ├── __init__.py
│   ├── council.py         # Main council orchestration logic
│   ├── models.py          # Model wrappers for different providers
│   ├── anonymizer.py      # Opinion anonymization system
│   ├── voting.py          # Voting system implementation
│   └── utils/             # Utility modules
│       ├── __init__.py
│       ├── logging.py     # Logging configuration
│       └── validation.py  # Input validation
├── config.py              # Configuration and model list
├── main.py                # CLI entry point
├── main_gui.py            # GUI entry point
├── requirements.txt       # Python dependencies
├── env.example            # Environment variables template
├── .env                   # Your API keys (create from env.example)
├── run_council.bat        # Windows launcher
├── run_council.sh         # Linux/Mac launcher
├── README.md              # This file
└── WHY_ONLY_3_MODELS.md   # Troubleshooting guide
```
Question: "What is the best approach to learn machine learning?"
Process:
- 5 models generate initial opinions
- Models discuss and refine opinions (2 rounds)
- Models vote on the best approach
- Winning opinion is returned
Question: "Should I use Python or JavaScript for a new web project?"
Process:
- Each model provides pros/cons
- Models debate the trade-offs
- Final consensus recommendation
Question: "How can I improve my productivity while working from home?"
Process:
- Diverse perspectives from different models
- Discussion leads to comprehensive solution
- Voted best practices emerge
Solutions:

1. Check API Keys:
   - Verify your `.env` file exists
   - Ensure at least one API key is set
   - Check that keys are not expired

2. Check Ollama (if using local models):

```bash
ollama list
```

If the list is empty, pull models:

```bash
ollama pull llama3.2
```

3. Verify Ollama is Running:

```bash
# Test connection
python -c "import ollama; print(ollama.Client().list())"
```
Solutions:

1. Start Ollama Service:

```bash
ollama serve
```

2. Check Windows Task Manager for an "ollama" process

3. Verify Models are Installed:

```bash
ollama list
```

4. Restart the Application after starting Ollama

See WHY_ONLY_3_MODELS.md for more details.
Solutions:

- Check Internet Connection
- Verify API Key is Valid:
  - Test the key on the provider's website
  - Check for typos in the `.env` file
- Check API Rate Limits:
  - Free tiers have usage limits
  - Wait a few minutes and try again
- Verify API Key Format:
  - No extra spaces
  - No quotes around the key
  - Correct variable name
Solutions:

```bash
# Reinstall dependencies
pip install -r requirements.txt

# Or use pip3
pip3 install -r requirements.txt
```

Solutions:

1. Check Python Version:

```bash
python --version  # Should be 3.8+
```

2. Install Tkinter (usually included, but some Linux distros need it):

```bash
# Ubuntu/Debian
sudo apt-get install python3-tk

# Fedora
sudo dnf install python3-tkinter
```
Q: Do I need API keys for every provider?
A: No! You only need one API key to get started. More models = better results, but one is enough.

Q: Which provider should I start with?
A:
- Beginners: Start with Groq (fastest, easy setup)
- Privacy-conscious: Use Ollama (runs locally, no API key)
- Best results: Use multiple providers for diverse perspectives

Q: How many models do I need?
A: A minimum of 2 models for voting to work; 3-5 models are recommended for good consensus.

Q: Is it free to use?
A: Yes! All providers offer free tiers. Ollama is completely free and runs locally.

Q: Does it work offline?
A: Yes, if you use Ollama models. Cloud-based models (Groq, etc.) require internet.

Q: How long does a deliberation take?
A: It depends on:
- Number of models (more = longer)
- Discussion rounds (default: 2)
- Model speed (Groq is fastest)
- Typically 30 seconds to 2 minutes

Q: Can I change the voting mode?
A: Yes! Edit `config.py` or set `COUNCIL_VOTING_MODE` in `.env`.

Q: How do I add more models?
A: Edit `config.py` and add model configurations to the `MODELS` list.

Q: Can I export the results?
A: Yes!
- GUI: Click the "Export Results" button
- CLI: Answer 'y' when prompted

Q: What happens if a model fails?
A: The system automatically skips failed models and continues with available ones.
Contributions are welcome! Here are some ways to help:
- Report Bugs: Open an issue with details
- Suggest Features: Share your ideas
- Improve Documentation: Fix typos, add examples
- Add Model Providers: Extend support for new AI services
- Optimize Performance: Improve speed and efficiency
This project is open source. Feel free to use, modify, and distribute.
Start GUI:

```bash
python main_gui.py
```

Start CLI:

```bash
python main.py
```

Check Models:

```bash
python main.py  # Shows available models
```

Test Ollama:

```bash
ollama list
```

Get API Keys:
- Groq: https://console.groq.com/keys
- Hugging Face: https://huggingface.co/settings/tokens
- Together AI: https://api.together.xyz/settings/api-keys
- Cohere: https://dashboard.cohere.com/api-keys
Need Help? Check the Troubleshooting section or open an issue!