| Tool | Description |
|---|---|
| `search_families` | Browse provider groupings (e.g. `openai`, `openrouter/deepseek`) |
| `search_models` | Find models with metadata — Elo, pricing, knowledge cutoff, notes |
| `refresh_models` | Force re-scan of providers and re-fetch enrichment data |
**`search_families`**
- `search` (optional) — substring filter on family names
- `zdr` (optional) — override ZDR filtering (bool)

**`search_models`**
- `search` (optional) — substring filter on model identifiers
- `zdr` (optional) — override ZDR filtering (bool)

**`refresh_models`** — no parameters
| Tool | Description |
|---|---|
| `completion` | Query any model by full ID or favourite shorthand |

**`completion`**
- `model` (required) — full model identifier (e.g. `openai/gpt-5.2`) or favourite shorthand (e.g. `openai`)
- `prompt` (required) — the user prompt
- `system` (optional) — system prompt
- `temperature` (optional) — 0.0-2.0, omit to use model default
Shorthand resolves to the most-used model in that family. For example, if you use `openai/gpt-5.2` most often, passing `openai` will route to it.
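Based on the description above, shorthand resolution amounts to a usage-ranked lookup within a family. A minimal sketch (the helper name and annotation layout are assumptions; the real logic lives in `src/ask_another/server.py`):

```python
def resolve_shorthand(model: str, annotations: dict) -> str:
    """Resolve a favourite shorthand (e.g. 'openai') to the most-used
    full model ID in that family; full IDs pass through unchanged."""
    if "/" in model:
        return model  # already a full identifier
    # Rank models in the named family by their tracked call_count
    ranked = [
        (entry.get("usage", {}).get("call_count", 0), model_id)
        for model_id, entry in annotations.items()
        if model_id.startswith(model + "/")
    ]
    if not ranked:
        return model  # unknown shorthand: let the provider reject it
    return max(ranked)[1]

annotations = {
    "openai/gpt-5.2": {"usage": {"call_count": 42}},
    "openai/gpt-4o": {"usage": {"call_count": 3}},
}
print(resolve_shorthand("openai", annotations))  # openai/gpt-5.2
```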
| Tool | Description |
|---|---|
| `start_research` | Launch a background deep research task with web search and citations |
| `check_research` | List all research jobs, or retrieve results for a specific job |
| `cancel_research` | Cancel a running research task |
**`start_research`**
- `model` (required) — e.g. `openrouter/perplexity/sonar-deep-research`
- `query` (required) — the research question
- `timeout` (optional) — max seconds to wait (default 300)

**`check_research`**
- `job_id` (optional) — specific job to retrieve. Omit to list all jobs.

**`cancel_research`**
- `job_id` (required) — job to cancel
| Tool | Description |
|---|---|
| `generate_image` | Generate images from text prompts, returned inline and saved to disk |

**`generate_image`**
- `model` (required) — e.g. `openai/gpt-image-1`, `gemini/gemini-2.5-flash-image`
- `prompt` (required) — text description of the image
- `size` (optional) — e.g. `1024x1024`, `1536x1024` (dedicated image models only)
- `quality` (optional) — `low`, `medium`, `high`, `hd`, `standard` (dedicated image models only)
| Tool | Description |
|---|---|
| `annotate_models` | Add or update a personal note on a model |
| `feedback` | Report usability issues or suggestions |

**`annotate_models`**
- `model` (required) — full model identifier
- `note` (required) — your note (overwrites any existing note)

**`feedback`**
- `issue` (required) — what went wrong or what could be better
- `tool_name` (optional) — which tool was involved
`~/.ask-another-annotations.json` is the single source of truth for model metadata, usage tracking, and personal notes.
Each model entry has three optional sections:
- `metadata` — automatically populated on startup: `arena_elo`, `knowledge_cutoff`, `organization`, `license`, `context_length`, `pricing_in`, `pricing_out`, `openrouter_listed`, `first_seen`, `last_updated`
- `usage` — tracked automatically on each `completion` call: `call_count`, `last_used`
- `annotations` — set by you via `annotate_models`: `note`
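Put together, a single model entry might look like this (all field values are illustrative, not real data):

```python
import json

# Illustrative entry in ~/.ask-another-annotations.json (values made up)
entry = {
    "openai/gpt-5.2": {
        "metadata": {
            "arena_elo": 1350,
            "knowledge_cutoff": "2025-01",
            "organization": "OpenAI",
            "license": "proprietary",
            "context_length": 200000,
            "pricing_in": 1.25,
            "pricing_out": 10.0,
            "openrouter_listed": True,
            "first_seen": "2025-06-01",
            "last_updated": "2025-06-15",
        },
        "usage": {"call_count": 42, "last_used": "2025-06-15"},
        "annotations": {"note": "Good at refactoring tasks"},
    }
}
print(json.dumps(entry, indent=2))
```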
The top 5 models by `call_count` become your favourites. No configuration needed — just use the MCP and favourites emerge from actual usage. Favourites appear in the server instructions so your AI assistant knows your preferred models.
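The favourite selection described above is essentially a sort on tracked call counts. A minimal sketch, assuming a hypothetical `favourites` helper over the annotations dict:

```python
def favourites(annotations: dict, top_n: int = 5) -> list[str]:
    """Rank model IDs by usage.call_count; the top N become favourites."""
    ranked = sorted(
        annotations.items(),
        key=lambda item: item[1].get("usage", {}).get("call_count", 0),
        reverse=True,
    )
    return [model_id for model_id, _ in ranked[:top_n]]

tracked = {
    "openai/gpt-5.2": {"usage": {"call_count": 42}},
    "gemini/gemini-2.5-pro": {"usage": {"call_count": 7}},
    "openai/gpt-4o": {},  # never called via completion
}
print(favourites(tracked))  # most-used model first
```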
On startup (and when you call `refresh_models`), the server fetches:
- Elo ratings — from LMArena arena-catalog (GitHub JSON)
- Knowledge cutoff, organization, license — from LMArena metadata (HuggingFace CSV)
- Pricing, context length, listing date — from the OpenRouter API
Enrichment is fail-safe: if any source errors, the server continues with partial data. Data refreshes automatically when `CACHE_TTL_MINUTES` expires.
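The fail-safe behaviour can be sketched as a merge loop that swallows per-source failures (function names here are hypothetical, not the server's actual API):

```python
def enrich(model_ids, sources):
    """Merge metadata from several fetchers; a source that raises is
    skipped, so partial data still comes through (fail-safe)."""
    merged = {mid: {} for mid in model_ids}
    for fetch in sources:
        try:
            data = fetch()
        except Exception:
            continue  # source unreachable: carry on with what we have
        for mid, fields in data.items():
            if mid in merged:
                merged[mid].update(fields)
    return merged

def fetch_elo():
    return {"openai/gpt-5.2": {"arena_elo": 1350}}

def fetch_pricing():
    raise ConnectionError("OpenRouter unreachable")

# Pricing fails, but the Elo data still lands
print(enrich(["openai/gpt-5.2"], [fetch_elo, fetch_pricing]))
```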
- Single file — `src/ask_another/server.py` is the entire server
- LiteLLM — unified multi-provider LLM client
- FastMCP — MCP server framework
- No database — annotations JSON file + in-memory model cache
- Dynamic discovery — models fetched from provider APIs, no hardcoded model list
- Name matching — arena metadata is matched to provider models via normalized model names (strip provider prefix, dates, common suffixes)
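The name-matching normalization can be sketched roughly like this (the exact prefixes, date formats, and suffix list the server strips are assumptions here):

```python
import re

def normalize(name: str) -> str:
    """Rough sketch of name matching: strip provider prefix, trailing
    dates, and common suffixes so arena and provider names line up."""
    name = name.lower().split("/")[-1]                   # drop provider prefix
    name = re.sub(r"-\d{4}-\d{2}-\d{2}$", "", name)      # trailing date stamp
    name = re.sub(r"-(preview|latest|exp)$", "", name)   # common suffixes
    return name

print(normalize("openai/gpt-5.2-2025-06-01"))  # gpt-5.2
```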
Requires Python 3.10+ and uv.

```shell
git clone https://github.com/matthewgjohnson/ask-another
cd ask-another
uv sync
```

Run the server:

```shell
PROVIDER_OPENAI="openai;sk-your-key" uv run ask-another
```

With debug logging:

```shell
LOG_LEVEL=DEBUG PROVIDER_OPENAI="openai;sk-your-key" uv run ask-another
```

Provider credentials are required for manual testing of provider-backed tools, but all tests run without API keys.
Full suite:

```shell
uv run --with pytest python -m pytest tests/ -v
```

Individual test files:

```shell
uv run --with pytest python -m pytest tests/test_annotations.py -v
```

Test map:
- `test_annotations.py` — enrichment, normalisation, metadata, search
- `test_feedback.py` — feedback tool and JSONL logging
- `test_image_generation.py` — image generation paths
- `test_logging.py` — log config and rotation
- `src/ask_another/server.py` — the entire server
- `tests/` — all tests, mocked (no network access needed)
The server persists state in `~/.ask-another-annotations.json`. If behaviour seems wrong during development, inspect or remove this file to reset.
Please run the full test suite locally before opening a PR.