
Council LLM uses OpenRouter to send your query to multiple LLMs, asks them to review and rank each other's work, and finally has a Chairman LLM produce the final response.


LLM Council


Live Deployment


In a bit more detail, here is what happens when you submit a query:

  1. Stage 1: First opinions. The user query is given to all LLMs individually, and the responses are collected. The individual responses are shown in a "tab view", so that the user can inspect them all one by one.
  2. Stage 2: Review. Each LLM is given the responses of the other LLMs. Under the hood, the identities are anonymized so that no model can play favorites when judging its peers' outputs. Each LLM is asked to rank the responses by accuracy and insight.
  3. Stage 3: Final response. The designated Chairman of the LLM Council takes all of the models' responses and compiles them into a single final answer that is presented to the user.
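
The whole pipeline boils down to a handful of chat-completion calls. Below is a minimal sketch of the three stages against OpenRouter's chat completions endpoint; it is an illustration, not the project's actual backend code, and the model list, prompts, and ask() helper are all placeholders.

# Minimal sketch of the three-stage council flow. Illustrative only: the real
# backend differs; model IDs, prompts, and the ask() helper are placeholders.
import asyncio
import os

import httpx

API_URL = "https://openrouter.ai/api/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}
COUNCIL = ["openai/gpt-5.1", "x-ai/grok-4"]       # placeholder council
CHAIRMAN = "google/gemini-3-pro-preview"          # placeholder chairman

async def ask(client: httpx.AsyncClient, model: str, prompt: str) -> str:
    """Send one prompt to one model via OpenRouter and return its reply text."""
    r = await client.post(API_URL, headers=HEADERS, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

async def council(query: str) -> str:
    async with httpx.AsyncClient(timeout=120) as client:
        # Stage 1: every council member answers the query independently.
        answers = await asyncio.gather(*(ask(client, m, query) for m in COUNCIL))
        # Stage 2: each member ranks the answers, anonymized by labeling them
        # "Response N" instead of by model name.
        numbered = "\n\n".join(f"Response {i + 1}:\n{a}" for i, a in enumerate(answers))
        reviews = await asyncio.gather(*(
            ask(client, m, f"Rank these anonymized responses to the query "
                           f"{query!r} by accuracy and insight:\n\n{numbered}")
            for m in COUNCIL))
        # Stage 3: the Chairman compiles answers and reviews into one reply.
        return await ask(
            client, CHAIRMAN,
            f"Query: {query}\n\nResponses:\n{numbered}\n\nPeer reviews:\n"
            + "\n\n".join(reviews))

print(asyncio.run(council("Why is the sky blue?")))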

Vibe Code Alert

This project was 99% vibe coded as a fun Saturday hack because I wanted to explore and evaluate a number of LLMs side by side while reading books together with LLMs. It's nice and useful to see multiple responses side by side, along with the cross-opinions of all the LLMs on each other's outputs. I'm not going to support it in any way; it's provided here as-is for other people's inspiration, and I don't intend to improve it. Code is ephemeral now and libraries are over; ask your LLM to change it in whatever way you like.

Setup

1. Install Dependencies

The project uses uv for project management.

Backend:

uv sync

Frontend:

cd frontend
npm install
cd ..

2. Configure API Key

Create a .env file in the project root:

OPENROUTER_API_KEY=sk-or-v1-...

Get your API key at openrouter.ai. Make sure to purchase the credits you need, or sign up for automatic top-up.
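
For reference, here is a typical way the backend might read that key at startup. This is a sketch assuming python-dotenv is installed, not necessarily what backend/main.py actually does:

# Sketch: load the OpenRouter key from .env (assumes python-dotenv is available).
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root / working directory
OPENROUTER_API_KEY = os.environ["OPENROUTER_API_KEY"]  # KeyError if unset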

3. Configure Models (Optional)

Edit backend/config.py to customize the council:

COUNCIL_MODELS = [
    "openai/gpt-5.1",
    "google/gemini-3-pro-preview",
    "anthropic/claude-sonnet-4.5",
    "x-ai/grok-4",
]

CHAIRMAN_MODEL = "google/gemini-3-pro-preview"
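
Model identifiers use OpenRouter's provider/model naming, so any model available to your OpenRouter account can serve on the council or as Chairman.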

Features

  • Multi-Model Consensus: Get responses from multiple top-tier LLMs simultaneously.
  • Peer Review: Models critique and rank each other's responses.
  • Final Synthesis: A "Chairman" model aggregates the best insights into a final answer.
  • Dashboard: Visualize statistics and a leaderboard of model performance based on peer reviews.
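
The leaderboard amounts to aggregating the Stage 2 rankings. As a toy illustration (the review data shape below is invented, not the app's actual schema), average the rank each model received across all peer reviews, lower being better:

# Toy leaderboard: average the rank each model received across peer reviews.
# Lower average rank is better. The data shape below is invented.
from collections import defaultdict

reviews = [  # each dict is one peer review: model -> rank it was given
    {"openai/gpt-5.1": 1, "x-ai/grok-4": 2},
    {"openai/gpt-5.1": 2, "x-ai/grok-4": 1},
    {"openai/gpt-5.1": 1, "x-ai/grok-4": 3},
]

totals = defaultdict(lambda: [0, 0])  # model -> [sum of ranks, review count]
for review in reviews:
    for model, rank in review.items():
        totals[model][0] += rank
        totals[model][1] += 1

for model in sorted(totals, key=lambda m: totals[m][0] / totals[m][1]):
    rank_sum, n = totals[model]
    print(f"{model}: average rank {rank_sum / n:.2f} over {n} reviews")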

Documentation

Running the Application

Option 1: Use the start script

./start.sh

Option 2: Run manually

Terminal 1 (Backend):

uv run python -m backend.main

Terminal 2 (Frontend):

cd frontend
npm run dev

Then open http://localhost:5173 in your browser.

Tech Stack

  • Backend: FastAPI (Python 3.10+), async httpx, OpenRouter API
  • Frontend: React + Vite, react-markdown for rendering
  • Storage: JSON files in data/conversations/ (sketched after this list)
  • Package Management: uv for Python, npm for JavaScript
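
A minimal sketch of what that JSON-file storage could look like. The file naming and record shape here are assumptions for illustration, not the app's actual schema:

# Sketch of JSON-file conversation storage (naming and record shape assumed).
import json
import uuid
from pathlib import Path

DATA_DIR = Path("data/conversations")

def save_conversation(messages: list[dict]) -> str:
    """Write a conversation to data/conversations/<id>.json and return its id."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    conv_id = uuid.uuid4().hex
    (DATA_DIR / f"{conv_id}.json").write_text(
        json.dumps({"id": conv_id, "messages": messages}, indent=2))
    return conv_id

def load_conversation(conv_id: str) -> dict:
    """Read one conversation back by id."""
    return json.loads((DATA_DIR / f"{conv_id}.json").read_text())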
