MAIS is a web app for running turn-based, multi-actor LLM simulations with real-time streaming.
Mix and match models from different providers in a single simulation. Use cloud APIs (OpenAI, Anthropic, Google) or run locally with Ollama.
What if large language models could talk to each other? What if Gemini 3 Pro could debate GPT-5.2 on safety versus capability, or argue with Claude about whether reasoning should be cautious or bold? What if Isaac Newton could debate Friedrich Nietzsche on determinism and free will? What if Marie Curie challenged Steve Jobs on whether innovation should prioritize safety or speed? What if Nelson Mandela sat across from Niccolò Machiavelli to argue whether moral compromise is ever justified for power?
This project creates a round table where multiple AI models talk to each other—not just different personas, but different systems. Imagine ChatGPT debating Gemini on strategy, Claude critiquing both for hidden assumptions, or a local open-source model playing the skeptic.
You can configure:
- Interaction modes: Debate, Collaboration, Interaction, Custom
- Actors: persona, model, optional system prompt
- Debate: per-actor side (For/Against/Auto) + optional Moderator
- Collaboration: per-actor responsibility + optional Synthesizer/Lead
The UI renders messages as Markdown (code blocks, lists, etc.) and streams tokens live via Server-Sent Events (SSE).
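For readers scripting against the stream rather than using the UI, SSE messages are groups of `field: value` lines separated by a blank line. A minimal parser sketch (simplified: it keeps only one value per field and ignores keep-alive comments; the `token` event name in the example is an assumption, not taken from the backend):

```python
def parse_sse(stream_lines):
    """Yield one event dict per SSE message.

    SSE messages are groups of "field: value" lines separated by a blank
    line; this simplified parser keeps the last value seen per field and
    skips comment lines (those starting with ':').
    """
    event = {}
    for raw in stream_lines:
        line = raw.rstrip("\n")
        if not line:                      # blank line terminates a message
            if event:
                yield event
                event = {}
            continue
        if line.startswith(":"):          # SSE comment / keep-alive
            continue
        field, _, value = line.partition(":")
        event[field.strip()] = value.lstrip()
    if event:                             # flush a trailing message
        yield event


# Example: two streamed token frames, as the backend might emit them
frames = [
    "event: token\n", "data: Hello\n", "\n",
    "event: token\n", "data: world\n", "\n",
]
events = list(parse_sse(frames))
# events[0] == {"event": "token", "data": "Hello"}
```

Feed it the line iterator of a streaming HTTP response to the events endpoint to consume the live stage from a script.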
- backend/: FastAPI + LangChain (multi-provider LLM calls)
- frontend/: React + Vite + TypeScript
- backend/TECHNICAL_DOCUMENTATION.md: detailed backend API + flow
- Python 3.11+
- Node.js 18+
From repo root:
macOS/Linux:
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
pip install -r backend/requirements.txt
cp backend/env.example backend/.env

Windows (PowerShell):
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
pip install -r backend/requirements.txt
copy backend\env.example backend\.env

Configure API keys in backend/.env:
Edit backend/.env to add your API keys for the cloud providers you want to use. API keys are only required for OpenAI, Anthropic, and Google. Ollama runs locally without an API key.
OPENAI_API_KEY
ANTHROPIC_API_KEY
GOOGLE_API_KEY
Run:
uvicorn app.main:app --app-dir backend --reload --host 0.0.0.0 --port 8000

Backend runs at http://localhost:8000.
From repo root:
cd frontend
npm install
npm run dev

Frontend runs at http://localhost:5173.
The application loads available models from backend/model_catalog.json. To add a new model (e.g., a new OpenAI model or a local Ollama model), simply edit this file.
Example entry:
{
"id": "my-new-model",
"display_name": "My New Model",
"provider": "openai"
}

Supported providers: openai, anthropic, google, ollama.
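Catalog entries can also be appended from a script. A sketch with basic validation (the validation logic is an illustration, not part of the backend, and it assumes the catalog file is a JSON array of entries — check the actual file shape first):

```python
import json
from pathlib import Path

SUPPORTED_PROVIDERS = {"openai", "anthropic", "google", "ollama"}

def add_model(catalog_path: str, entry: dict) -> list:
    """Append a model entry to the catalog file after basic validation.

    Assumes the catalog is a JSON array of {"id", "display_name",
    "provider"} objects, matching the example entry above.
    """
    if entry.get("provider") not in SUPPORTED_PROVIDERS:
        raise ValueError(f"unknown provider: {entry.get('provider')!r}")
    if not entry.get("id") or not entry.get("display_name"):
        raise ValueError("entry needs both 'id' and 'display_name'")
    path = Path(catalog_path)
    catalog = json.loads(path.read_text())
    catalog.append(entry)
    path.write_text(json.dumps(catalog, indent=2))
    return catalog
```

Restart the backend (or rely on --reload) after editing so the new entry shows up in /api/models.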
- Pick an interaction mode in the left panel.
- Set the topic and configure actors in the center panel.
- Click Start Simulation and watch the Live Stage stream.
- Click Stop to cancel the run.
- Click Download to save the transcript JSON.
Notes:
- The server enforces one active simulation at a time (starting a second returns 409).
- If no client is listening to the SSE stream, the backend auto-stops after ORPHAN_GRACE_SECONDS.
curl http://localhost:8000/healthz

The frontend pulls model options from the backend:
curl http://localhost:8000/api/models

The catalog is defined in backend/model_catalog.json (editable).
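The same endpoint can be queried from a script and grouped by provider; a sketch assuming the response is a JSON array of catalog entries like the example above (verify the actual shape against /api/models):

```python
import json
from urllib.request import urlopen

def group_by_provider(entries: list) -> dict:
    """Group catalog entries (dicts with 'id' and 'provider') by provider."""
    grouped: dict = {}
    for entry in entries:
        grouped.setdefault(entry["provider"], []).append(entry["id"])
    return grouped

def fetch_models(base_url: str = "http://localhost:8000") -> dict:
    """Fetch /api/models and group the result by provider.

    Assumes the endpoint returns a JSON array of catalog entries.
    """
    with urlopen(f"{base_url}/api/models") as resp:
        return group_by_provider(json.load(resp))
```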
curl -X POST http://localhost:8000/api/simulations \
-H "Content-Type: application/json" \
-d '{
"topic": "Should AI systems be open source?",
"mode": "debate",
"stage": "This is a debate setting. Participants must argue for or against the topic and challenge weak assumptions.",
"turn_limit": 6,
"agents": [
{ "name": "Alice", "model": "gpt-4o-mini", "debate_side": "for" },
{ "name": "Bob", "model": "gpt-4o-mini", "debate_side": "against" }
],
"moderator": { "enabled": true, "model": "gpt-4o-mini", "frequency_turns": 2 },
"synthesizer": { "enabled": false }
}'

curl -X POST http://localhost:8000/api/simulations \
-H "Content-Type: application/json" \
-d '{
"topic": "Plan a 2-day weekend trip to Kyoto",
"mode": "collaboration",
"stage": "This is a collaborative setting. Build on each other and converge on a practical plan.",
"turn_limit": 7,
"agents": [
{ "name": "Alice", "model": "gpt-4o-mini", "responsibility": "logistics + schedule" },
{ "name": "Bob", "model": "gpt-4o-mini", "responsibility": "budget + food recommendations" }
],
"moderator": { "enabled": false },
"synthesizer": { "enabled": true, "model": "gpt-4o-mini", "frequency_turns": 2 }
}'

Stream events (SSE):

curl -N http://localhost:8000/api/simulations/<SIM_ID>/events

Stop a simulation:

curl -X POST http://localhost:8000/api/simulations/<SIM_ID>/stop

Download the transcript:

curl http://localhost:8000/api/simulations/<SIM_ID>/download

Backend:
cd backend
pytest -q

Frontend:
cd frontend
npm test

- Backend API + execution flow: backend/TECHNICAL_DOCUMENTATION.md
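The request bodies in the curl examples can also be built programmatically; a sketch that mirrors the debate payload shown above (the helper name and defaults are illustrative, not part of the project):

```python
import json

def debate_payload(topic: str, sides: dict, model: str = "gpt-4o-mini",
                   turn_limit: int = 6) -> dict:
    """Build a debate request body matching the curl example above.

    `sides` maps actor name -> "for" | "against" | "auto".
    """
    return {
        "topic": topic,
        "mode": "debate",
        "stage": ("This is a debate setting. Participants must argue for or "
                  "against the topic and challenge weak assumptions."),
        "turn_limit": turn_limit,
        "agents": [
            {"name": name, "model": model, "debate_side": side}
            for name, side in sides.items()
        ],
        "moderator": {"enabled": True, "model": model, "frequency_turns": 2},
        "synthesizer": {"enabled": False},
    }

body = debate_payload("Should AI systems be open source?",
                      {"Alice": "for", "Bob": "against"})
print(json.dumps(body, indent=2))  # pipe into `curl -d @-` if you like
```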
