Quorum supports any provider available through the pi-ai library. This guide covers setup for each supported provider.
The fastest way to get started:
```shell
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
quorum init
```

Quorum scans environment variables and local services, then configures providers automatically.
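Before running `quorum init`, it can help to confirm a key is actually visible to your shell. A minimal sketch (the placeholder value is illustrative, not a real key):

```shell
# Export a placeholder key, then confirm the current shell can see it.
# Replace the placeholder with your real key in practice.
export OPENAI_API_KEY=sk-placeholder
if [ -n "$OPENAI_API_KEY" ]; then
  echo "OPENAI_API_KEY is set"
else
  echo "OPENAI_API_KEY is missing"
fi
```

Keys exported in one terminal are not visible in another; add the `export` lines to your shell profile to make them persistent.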
### OpenAI

```shell
export OPENAI_API_KEY=sk-...
```

Get your key at platform.openai.com/api-keys.
Config:

```yaml
- name: openai
  provider: openai
  model: gpt-4o # or gpt-4o-mini, o3-pro, etc.
  auth:
    method: env
    envVar: OPENAI_API_KEY
```

### Anthropic (Claude)

```shell
export ANTHROPIC_API_KEY=sk-ant-...
```

Get your key at console.anthropic.com.
Config:

```yaml
- name: claude
  provider: anthropic
  model: claude-sonnet-4-20250514
  auth:
    method: env
    envVar: ANTHROPIC_API_KEY
```

OAuth (Claude Code): If you use Claude Code with OAuth, Quorum can read the token from the macOS Keychain:
```yaml
- name: claude
  provider: anthropic
  model: claude-sonnet-4-20250514
  auth:
    method: oauth_keychain
    service: com.anthropic.claude-code
```

### Google (Gemini)

```shell
export GOOGLE_GENERATIVE_AI_API_KEY=AI...
```

Get your key at aistudio.google.com/apikey.
Config:

```yaml
- name: gemini
  provider: google
  model: gemini-2.0-flash
  auth:
    method: env
    envVar: GOOGLE_GENERATIVE_AI_API_KEY
```

Gemini CLI: If you have the Gemini CLI installed, Quorum can detect it automatically.
### Kimi (Moonshot)

```shell
export KIMI_API_KEY=sk-...
```

Get your key at platform.moonshot.cn.
Config:

```yaml
- name: kimi
  provider: kimi
  model: moonshot-v1-auto
  auth:
    method: env
    envVar: KIMI_API_KEY
```

### DeepSeek

```shell
export DEEPSEEK_API_KEY=sk-...
```

Get your key at platform.deepseek.com.
Config:

```yaml
- name: deepseek
  provider: deepseek
  model: deepseek-chat
  auth:
    method: env
    envVar: DEEPSEEK_API_KEY
```

### Mistral

```shell
export MISTRAL_API_KEY=...
```

Get your key at console.mistral.ai.
Config:

```yaml
- name: mistral
  provider: mistral
  model: mistral-large-latest
  auth:
    method: env
    envVar: MISTRAL_API_KEY
```

### Groq

```shell
export GROQ_API_KEY=gsk_...
```

Get your key at console.groq.com.
Config:

```yaml
- name: groq
  provider: openai
  model: llama-3.3-70b-versatile
  baseUrl: https://api.groq.com/openai/v1
  auth:
    method: env
    envVar: GROQ_API_KEY
```

Note: Groq exposes an OpenAI-compatible API, so set `provider: openai` with a custom `baseUrl`.
### Ollama

Install from ollama.com, then:

```shell
ollama pull llama3
```

No API key is needed. Quorum auto-detects Ollama at http://localhost:11434.
Config:

```yaml
- name: ollama
  provider: ollama
  model: llama3
  auth:
    method: none
```

### LM Studio

Run LM Studio's local server (default: http://localhost:1234). Quorum auto-detects it.
Config:

```yaml
- name: lmstudio
  provider: ollama
  model: your-model-name
  baseUrl: http://localhost:1234
  auth:
    method: none
```

### Any OpenAI-compatible API
```yaml
- name: my-provider
  provider: openai
  model: my-model
  baseUrl: https://my-api.example.com/v1
  auth:
    method: env
    envVar: MY_API_KEY
```

### Managing providers

```shell
quorum providers list     # show all configured providers
quorum providers test     # test connectivity
quorum providers models   # browse available models
quorum providers add --name X --type openai --model Y --env Z
quorum providers remove X
```

### Recommended setup

For best deliberation quality, configure at least three providers from different model families:
```shell
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export GOOGLE_GENERATIVE_AI_API_KEY=AI...
quorum init
```

This gives you Claude + GPT + Gemini: three distinct model architectures that produce genuinely diverse perspectives.
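Before running `quorum init`, you can sanity-check that all three recommended keys are visible to your shell. A minimal sketch, using placeholder values for illustration:

```shell
# Placeholder keys for demonstration; use your real keys in practice.
export ANTHROPIC_API_KEY=sk-ant-demo
export OPENAI_API_KEY=sk-demo
export GOOGLE_GENERATIVE_AI_API_KEY=AIdemo

# Count how many of the recommended keys are set in this shell.
count=0
for v in ANTHROPIC_API_KEY OPENAI_API_KEY GOOGLE_GENERATIVE_AI_API_KEY; do
  [ -n "$(printenv "$v")" ] && count=$((count + 1))
done
echo "provider keys set: $count of 3"
```

If the count is below 3, the missing exports were likely made in a different terminal or never added to your shell profile.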