Command Summarizer
name: agent-brain-summarizer
description: Configure the summarization provider for code summaries
parameters:
  - name: provider
    description: Summarization provider (anthropic, openai, gemini, grok, ollama)
    required: false
  - name: model
    description: Model name for the provider
    required: false
skills:
  - using-agent-brain
Configures the summarization provider used during document indexing. Summarization generates concise descriptions of code and documents to improve search relevance.
/agent-brain-summarizer [provider] [--model <model>]
| Parameter | Required | Default | Description |
|---|---|---|---|
| provider | No | - | Provider: anthropic, openai, gemini, grok, ollama |
| --model | No | Provider default | Specific model to use |
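As a quick end-to-end sketch (the docs path below is a placeholder), a typical flow is to set the provider and then index so summaries are generated with it:

# Configure the summarizer, then index so summaries are produced with the new provider
/agent-brain-summarizer anthropic --model claude-haiku-4-5-20251001
agent-brain index /path/to/docs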
Anthropic: High-quality, code-aware summarization.
| Model | Speed | Use Case |
|---|---|---|
| claude-haiku-4-5-20251001 | Fast | Cost-effective, good quality |
| claude-sonnet-4-5-20250514 | Medium | Balanced quality/speed |
| claude-opus-4-5-20251101 | Slower | Highest quality |
Configuration:
export SUMMARIZATION_PROVIDER=anthropic
export SUMMARIZATION_MODEL=claude-haiku-4-5-20251001
export ANTHROPIC_API_KEY=sk-ant-...

OpenAI: Versatile summarization with good code understanding.
| Model | Speed | Use Case |
|---|---|---|
| gpt-5-mini | Fast | Cost-effective |
| gpt-5 | Medium | High quality |
Configuration:
export SUMMARIZATION_PROVIDER=openai
export SUMMARIZATION_MODEL=gpt-5-mini
export OPENAI_API_KEY=sk-proj-...

Gemini: Google's models with large context windows.
| Model | Speed | Use Case |
|---|---|---|
| gemini-3-flash | Fast | Cost-effective |
| gemini-3-pro | Medium | Higher quality |
Configuration:
export SUMMARIZATION_PROVIDER=gemini
export SUMMARIZATION_MODEL=gemini-3-flash
export GOOGLE_API_KEY=...

Grok: xAI's conversational model.
| Model | Speed | Use Case |
|---|---|---|
| grok-4 | Medium | General use |
Configuration:
export SUMMARIZATION_PROVIDER=grok
export SUMMARIZATION_MODEL=grok-4
export XAI_API_KEY=...

Ollama: Privacy-first local summarization.
| Model | Speed | Use Case |
|---|---|---|
| llama4:scout | Fast | General purpose, lightweight |
| mistral-small3.2 | Fast | Balanced |
| qwen3-coder | Medium | Code-focused |
| gemma3 | Fast | Efficient |
| deepseek-coder-v3 | Medium | Code-optimized |
Configuration:
export SUMMARIZATION_PROVIDER=ollama
export SUMMARIZATION_MODEL=llama4:scout
export OLLAMA_BASE_URL=http://localhost:11434

Setup:
# Pull the model first
ollama pull llama4:scout

If no provider is specified, use AskUserQuestion:
Which summarization provider would you like to use?
Options:
1. Anthropic (claude-haiku-4-5-20251001) - High quality, code-aware (Recommended)
2. OpenAI (gpt-5-mini) - Fast, cost-effective
3. Gemini (gemini-3-flash) - Large context support
4. Grok (grok-4) - xAI's model
5. Ollama (llama4:scout) - Local, no API key required
# Set to Anthropic
/agent-brain-summarizer anthropic --model claude-haiku-4-5-20251001
# Set to OpenAI
/agent-brain-summarizer openai --model gpt-5-mini
# Set to Ollama
/agent-brain-summarizer ollama --model llama4:scout

Generate the export commands:
# Add to your shell profile or .env file:
export SUMMARIZATION_PROVIDER=anthropic
export SUMMARIZATION_MODEL=claude-haiku-4-5-20251001

Or update the project configuration:
agent-brain config set summarization_provider anthropic
agent-brain config set summarization_model claude-haiku-4-5-20251001

After changing summarization providers, you may want to re-index to use the new summarizer:
# Re-index to apply new summarization
agent-brain reset --yes
agent-brain index /path/to/docs

Note: Changing the summarization provider doesn't require re-indexing for existing searches to keep working, but new summaries won't be generated until you re-index.
# Verify summarization provider is working
agent-brain verify
# Test summarization
agent-brain test-summarize "def hello_world():\n print('Hello, World!')"

Summarization Provider Configuration
====================================
Provider: anthropic
Model: claude-haiku-4-5-20251001
API Key: ANTHROPIC_API_KEY (configured)
Configuration saved. Re-index recommended for new summaries.
Summarization Provider Comparison
=================================
| | Anthropic | OpenAI | Gemini | Ollama |
|---|---|---|---|---|
| Quality | Excellent | Very Good | Good | Good |
| Code Awareness | Excellent | Very Good | Good | Varies |
| Speed | Fast | Fast | Fast | Varies |
| Cost (per 1M input tokens) | $0.80 | $0.50 | $0.10 | Free |
| Privacy | Cloud | Cloud | Cloud | Local |
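As a rough back-of-envelope using the input prices above, and assuming a hypothetical corpus of 10,000 chunks at roughly 500 tokens each (about 5M input tokens; both figures are illustrative, not measurements):

# 10,000 chunks x ~500 tokens = ~5M input tokens
echo "scale=2; 5 * 0.80" | bc   # Anthropic: ~$4.00
echo "scale=2; 5 * 0.50" | bc   # OpenAI:    ~$2.50
echo "scale=2; 5 * 0.10" | bc   # Gemini:    ~$0.50
# Ollama: $0 in API cost (runs locally)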
Error: Unknown provider 'xyz'.
Valid options: anthropic, openai, gemini, grok, ollama
Error: Model 'invalid-model' not available for provider 'anthropic'.
Available models: claude-haiku-4-5-20251001, claude-sonnet-4-5-20250514, claude-opus-4-5-20251101
Error: Cannot connect to Ollama at http://localhost:11434
Resolution: Start Ollama with 'ollama serve' or check OLLAMA_BASE_URL
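To confirm the server is actually reachable before retrying (assuming the default port), a quick check against Ollama's model-listing endpoint looks like this:

# Returns JSON listing locally pulled models when Ollama is running
curl http://localhost:11434/api/tags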
Error: ANTHROPIC_API_KEY not set for Anthropic provider.
Resolution: export ANTHROPIC_API_KEY="sk-ant-..."
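One way to confirm the key is visible to your current shell without printing its value:

# Prints whether the variable is set; never echoes the key itself
[ -n "$ANTHROPIC_API_KEY" ] && echo "ANTHROPIC_API_KEY is set" || echo "ANTHROPIC_API_KEY is missing"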
- /agent-brain-providers - List all available providers
- /agent-brain-embeddings - Configure embedding provider
- /agent-brain-verify - Verify configuration