All notable changes to ClawRouter.
- Propagate `openai/gpt-5.5` everywhere it should appear. v0.12.167 added the model to `BLOCKRUN_MODELS`, the `gpt-5.5` alias, and the install-script `TOP_MODELS` allowlist — but every other place ClawRouter advertises a flagship still pointed at `gpt-5.4`. This release closes the gap so 5.5 is a first-class citizen across routing, the picker, marketing, and the OpenClaw skill page.
  - `src/router/config.ts` — three fallback-chain insertions, no primary changes. `openai/gpt-5.5` slots in immediately before `openai/gpt-5.4` in `auto.COMPLEX.fallback`, `premiumTiers.COMPLEX.fallback`, and `agenticTiers.COMPLEX.fallback`. Both stay reachable; 5.5 gets preference when the chain reaches OpenAI. Comments updated so 5.5 is "newest flagship — 1M+ ctx, native agent + computer use" and 5.4 is "previous flagship — benchmarked at 6,213ms, IQ 57". Tier primaries are unchanged: promoting 5.5 to a primary slot needs measured latency/IQ data, which we don't have yet — that's a separate decision tracked outside this release.
  - `src/index.ts` — `/model` picker allowlist updated. `src/index.ts` carries its own copy of `TOP_MODELS` (separate from the install scripts' identical-but-distinct list — both populate the OpenClaw allowlist depending on install path). Added `openai/gpt-5.5` and `anthropic/claude-opus-4.5` (also missed in v0.12.167's `BLOCKRUN_MODELS` add for opus-4.5), and replaced the now-deprecated `minimax/minimax-m2.5` with `minimax/minimax-m2.7` so the picker matches the deprecation we landed yesterday.
  - `README.md` — Premium Models pricing table. Added the `openai/gpt-5.5` row at $5.00/$30.00 per 1M tokens (~$0.0175 per 0.5K-in/0.5K-out request), 1M context, full feature set. Placed between `claude-opus-4.6` ($0.0150) and `o1` ($0.0375) so the table stays sorted by approximate $/request.
  - `skills/clawrouter/SKILL.md` — model list line. The "55+ models including..." line now leads `gpt-5.5, gpt-5.4, ...` and includes `claude-opus-4.5` alongside 4.7/4.6.
- Files deliberately not touched: `docs/smart-llm-router-14-dimension-classifier.md` and `docs/llm-router-benchmark-46-models-sub-1ms-routing.md` are frozen benchmark archives — adding 5.5 to a benchmark table without measured numbers would falsify the document. The `posts/*.md` marketing content is similarly point-in-time. Those will be refreshed if/when 5.5 gets benchmarked.
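A minimal sketch of the fallback-chain insertion described above, assuming an illustrative tier shape (the real objects live in `src/router/config.ts`; the helper and primary value here are assumptions, not the actual config code):

```typescript
// Illustrative sketch of the fallback-chain edit: the new model is
// inserted immediately before the previous flagship so both stay reachable.
type Tier = { primary: string; fallback: string[] };

function insertBefore(chain: string[], newModel: string, anchor: string): string[] {
  const i = chain.indexOf(anchor);
  if (i === -1) return [...chain, newModel]; // anchor missing: append instead
  return [...chain.slice(0, i), newModel, ...chain.slice(i)];
}

const complex: Tier = {
  primary: "anthropic/claude-opus-4.7", // primaries unchanged in this release
  fallback: ["openai/gpt-5.4", "deepseek/deepseek-v3.2"],
};
complex.fallback = insertBefore(complex.fallback, "openai/gpt-5.5", "openai/gpt-5.4");
```

The anchor-missing branch means the edit degrades gracefully if a chain never listed `gpt-5.4` in the first place.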
- Realign the model registry to BlockRun source-of-truth. An audit found three drifts where ClawRouter's `BLOCKRUN_MODELS` table didn't match what `blockrun/src/lib/models.ts` actually exposes. The server is the source of truth for which models exist and what they cost; the proxy's local view should mirror that 1:1 so cost estimation, the `/model` picker, and routing tier selection all see the same world the server does.
  - Add `openai/gpt-5.5`. BlockRun's newest visible OpenAI flagship — first fully retrained base since GPT-4.5, 1M+ context, 128K output, native agent + computer use. Pricing $5/$30 per 1M tokens. Added to `BLOCKRUN_MODELS`, the `gpt-5.5` alias, and the `TOP_MODELS` allowlist in both install scripts. Routing tiers in `src/router/config.ts` continue to anchor on `gpt-5.4` because that's what's benchmarked; users can pin `5.5` explicitly. The routing change is a separate decision.
  - Add `anthropic/claude-opus-4.5` as a distinct model. Previously ClawRouter's `MODEL_ALIASES` silently rewrote `anthropic/claude-opus-4.5` to 4.7, making 4.5 unreachable through ClawRouter even though BlockRun lists it as a separate visible model with its own pricing and 200K context (vs 4.6/4.7's 1M). Removed the alias, added 4.5 to `BLOCKRUN_MODELS` with its real 200K/32K shape, and added an `anthropic/claude-opus-4-5` (dashed) alias for the slug variant. A test in `src/models.test.ts` was codifying the old upgrade-to-4.7 behavior — flipped to assert the pin is preserved end-to-end.
  - Mark `minimax/minimax-m2.5` deprecated → fallback `minimax/minimax-m2.7`. BlockRun retired m2.5 entirely (only m2.7 is in their `MODELS` table). ClawRouter still listed both; m2.5 now flips to `deprecated: true` with the m2.7 fallback so existing pins keep working.
  - `scripts/reinstall.sh` + `scripts/update.sh`: drop `minimax/minimax-m2.5` from the `TOP_MODELS` picker allowlist (still reachable, just hidden from the picker) and add `openai/gpt-5.5` + `anthropic/claude-opus-4.5`.
- Tool-call planning prose suppressed even when `finish_reason` is the only signal (thanks @0xCheetah1, #162). Follow-up to v0.12.165's #161 fix. Live Telegram/OpenClaw testing caught one more shape the planning-prose leak could wriggle through: some upstreams (Moonshot Kimi K2.6 again) mark a turn with `finish_reason: "tool_calls"` without exposing `message.tool_calls`/`delta.tool_calls` at the same inspection point. The #161 gate (`toolCalls.length > 0`) saw no array and let the prose through. The gate is now `endsWithToolCalls || toolCalls.length > 0` — applied consistently across the non-streaming JSON path and the SSE emission path, plus the finish-reason override in the SSE terminal chunk. Two new regression tests in `src/proxy.tool-forwarding.test.ts` — one per response shape — lock the behavior in: a response with `finish_reason: "tool_calls"` and no tool_calls array has its `content` blanked and the `tool_calls` finish_reason preserved. User-visible impact: fewer "I should look up X before replying" preambles sneaking into agent chat surfaces for turns that are supposed to be pure tool invocations.
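A minimal sketch of the widened gate, assuming the OpenAI chat-completions response shape (the helper name is illustrative, not the actual function in `src/proxy.ts`):

```typescript
// Illustrative sketch of the widened planning-prose gate.
interface Choice {
  finish_reason?: string;
  message: { content?: string | null; tool_calls?: unknown[] };
}

function suppressPlanningProse(choice: Choice): Choice {
  const toolCalls = choice.message.tool_calls ?? [];
  const endsWithToolCalls = choice.finish_reason === "tool_calls";
  // The earlier gate checked toolCalls.length alone; some upstreams only
  // set finish_reason, so both signals are now honored.
  if (endsWithToolCalls || toolCalls.length > 0) {
    return {
      ...choice,
      message: { ...choice.message, content: "" }, // blank the planning prose
    };
  }
  return choice;
}
```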
- Tool-call planning prose no longer leaks to chat surfaces (thanks @0xCheetah1, #161). Some OpenAI-compatible providers — Moonshot's Kimi K2.6 was the visible offender through OpenClaw Telegram — return `{ content: "The user wants the current time. I should call get_current_time with Chicago.", tool_calls: [...] }`. Tool execution only needs `tool_calls`; the `content` field is internal planning that the upstream should have hidden behind a `<think>` tag but didn't. ClawRouter now suppresses `content` whenever `tool_calls.length > 0`, in both the non-streaming JSON response path and the SSE-conversion path that clients like OpenClaw hit with `stream: true`. Tool execution is unaffected; only the user-visible planning prose goes away. Covered by two regression tests in `src/proxy.tool-forwarding.test.ts` (one per response shape).
- Plugin restart loop killed. `injectModelsConfig()` in `src/index.ts` writes ClawRouter-owned keys into `~/.openclaw/openclaw.json` on every plugin load. OpenClaw's config watcher has a catch-all rule — any change with no matching plugin-declared prefix triggers a full gateway restart — so `mcp.servers.blockrun` writes kept ping-ponging the gateway. The plugin definition now exposes `reload: { noopPrefixes: ["mcp.servers.blockrun"] }` (a new optional field on `OpenClawPluginDefinition`) to tell OpenClaw's loader that ClawRouter self-manages that prefix. Silently ignored on OpenClaw runtimes that predate the `reload` field.
- Dedup + response cache now isolate streaming and non-streaming callers. Discovered while adding the SSE regression test for the tool-call fix: a `stream: true` request that followed an identical-body `stream: false` request was getting `content-type: application/json` instead of `text/event-stream`. Two compounding bugs. ClawRouter rewrites `parsed.stream = false` before the upstream call (the BlockRun API doesn't support streaming), and both `RequestDeduplicator.hash(body)` and `ResponseCache.generateKey(body)` ran AFTER that rewrite — so a `stream: true` and a `stream: false` request hashed identically. Worse, `response-cache.ts`'s `normalizeForCache` explicitly stripped `stream` from the key with the comment "we handle streaming separately" (it never did). Fix: (1) prefix both `dedupKey` and `cacheKey` in `src/proxy.ts` with the original `isStreaming` intent (`"sse:"` vs `"json:"`), so the two shapes never share a cache slot; (2) stop stripping `stream` in `normalizeForCache`. A latent bug — real-world impact was small because the exact scenario (identical body, different stream flag, within the 30s/10min TTLs) is rare in practice — but a correctness bug nonetheless. A regression test was added (isolates the dedup cache between streaming and non-streaming requests with identical bodies); the existing `response-cache.test.ts` expectation was inverted (it was codifying the broken behavior).
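The key fix can be sketched as follows; the hashing and the function name are stand-ins (the real keys come from `RequestDeduplicator.hash` and `ResponseCache.generateKey`):

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch: capture streaming intent BEFORE the body is
// rewritten for the non-streaming upstream, then prefix the key with it.
function requestKey(body: Record<string, unknown>): string {
  const isStreaming = body.stream === true;    // original caller intent
  const upstream = { ...body, stream: false }; // upstream is always non-streaming
  const hash = createHash("sha256").update(JSON.stringify(upstream)).digest("hex");
  return (isStreaming ? "sse:" : "json:") + hash; // shapes never share a slot
}
```

With the prefix in place, two otherwise-identical bodies that differ only in the `stream` flag can no longer collide in either cache.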
- Video generation switched to async submit + poll (tracks BlockRun server commit 654cd35). The server-side `/v1/videos/generations` endpoint no longer blocks for the full 60–180s upstream generation — POST now returns `202 { id, poll_url }` in ~3–20s, and a separate GET on the `poll_url` (same x-payment header) returns `202` while the job is queued/in-progress and `200` with the final video on completion. The server settles only on the first completed poll, so upstream failure or caller disconnect = zero USDC charged. ClawRouter's proxy handler in `src/proxy.ts` now collapses this back into a single blocking POST for the client: submit upstream, poll the `poll_url` every 5s (initial 3s grace) up to a 5-min deadline, then backup + serve locally as before. Legacy sync-shaped server responses still work — the handler checks for `poll_url` before switching to the poll loop. Client-side timeouts bumped: `buildVideoGenerationProvider.timeoutMs` 200s → 330s; the `/videogen` slash command 200s → 330s; both sit above the 5-min internal poll deadline so the last `data[0].url` finishes streaming back. User-facing impact: same blocking POST as before, but Cloudflare's 100s edge timeout no longer kills long-running Seedance 2.0 jobs.
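The submit-then-poll collapse can be sketched as below; the transport callbacks are stand-ins for the real fetch wiring in `src/proxy.ts`, and the body shapes follow the description above:

```typescript
// Illustrative sketch of the blocking submit + poll loop.
type Res = { status: number; body: any };

async function generateVideoBlocking(
  submit: () => Promise<Res>,
  poll: (url: string) => Promise<Res>,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<any> {
  const first = await submit();
  // Legacy sync-shaped servers return the finished video directly.
  if (!first.body?.poll_url) return first.body;

  const deadline = Date.now() + 5 * 60_000; // 5-minute internal deadline
  await sleep(3_000);                       // initial grace period
  while (Date.now() < deadline) {
    const res = await poll(first.body.poll_url);
    if (res.status === 200) return res.body; // job completed
    await sleep(5_000);                      // 202: still queued / in_progress
  }
  throw new Error("video generation timed out");
}
```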
- Image/video plumbing parity — four exposure surfaces now match the backend. The BlockRun server has supported 8 image models (DALL-E 3, GPT Image 1, Nano Banana / Pro, Flux 1.1 Pro, Grok Imagine / Pro, CogView-4) and 4 video models (Grok Imagine, Seedance 1.5 Pro / 2.0 Fast / 2.0) since v0.12.162, but the ClawRouter client exposed them inconsistently:
  - `buildImageGenerationProvider` in `src/index.ts` only advertised 4 image models. OpenClaw's native image picker couldn't see Flux, Grok Imagine (×2), or CogView-4 — the only way to hit them was raw curl with an explicit `model` field. The `models` array now lists all 8; `defaultModel` switched from `openai/gpt-image-1` to `google/nano-banana` (cheapest general-purpose default); `capabilities.geometry.sizes` adds CogView-4's 512x512, 768x768, 768x1344, 1344x768, and 1440x1440 sizes; `capabilities.edit.enabled` flipped to `true` so OpenClaw's edit UI surfaces gpt-image-1's `/v1/images/image2image` path.
  - `MODEL_ALIASES` in `src/models.ts` had zero image/video shortcuts. All 140+ aliases were LLM chat models. Added 17 new aliases so `resolveModelAlias("dalle")` → `openai/dall-e-3`, `"flux"` → `black-forest/flux-1.1-pro`, `"seedance"` → `bytedance/seedance-1.5-pro`, plus `banana`, `banana-pro`, `nano-banana-pro`, `gpt-image`, `flux-pro`, `grok-imagine`/`-pro`, `grok-video`, `cogview`, `seedance-1.5`, `seedance-2`, `seedance-2-fast`.
  - `/imagegen` and `/videogen` slash commands now actually exist. The README documented `/imagegen a dog dancing on the beach` as if it worked, but no such command was ever registered — silent drift from the aspirational README. Both commands now register via `api.registerCommand`, accept `--model=<alias>`, `--size=WxH`, `--n=<int>`, `--duration=<5|8|10>` flags (parsed by a shared `parseGenArgs` helper), resolve aliases through `resolveModelAlias`, POST to the proxy's `/v1/images/generations` and `/v1/videos/generations` endpoints, and return inline image markdown or video URLs. 402 responses surface as "top up with `/wallet`" hints; the video timeout is 200s to cover upstream polling. `/img2img` remains README-only for now — will land in a follow-up.
- Partner framework now includes image/video as LLM-callable tools. Added three new `PartnerServiceDefinition` entries in `src/partners/registry.ts` — `image_generation`, `image_edit`, `video_generation` — so the existing `buildPartnerTools` → `api.registerTool` pipeline surfaces them as `blockrun_image_generation`, `blockrun_image_edit`, and `blockrun_video_generation` tools. Agents can now tool-call image/video from chat without the skill layer guessing at raw HTTP shapes.
- Dropped the Twitter/X user-lookup partner. We no longer run X data as a product surface. Removed `x_users_lookup` from `PARTNER_SERVICES`, deleted the `skills/x-api/` skill directory, and stripped `x|` from the `/v1/(?:x|partner|pm|...)/` paid-route regex in `src/proxy.ts` (so `/v1/x/*` no longer short-circuits to the partner proxy — it now falls through to the usual chat-completion path or 404s cleanly). Server-side `/v1/x/*` endpoints are still live at blockrun.ai/api for any existing integrations; only the client wiring is retired.
- `/partners` + `clawrouter partners` CLI output compressed ~4×. Previously 6 lines per service (name, full agent-facing description, tool name, method, pricing block, blank) × 17 services ≈ 100 lines of wall-of-text, which is what @vicky was calling out as unreadable. `PartnerServiceDefinition` gained two fields — `category` ("Prediction markets" / "Market data" / "Image & Video") and `shortDescription` (≤ 40 chars) — driving a new grouped, column-aligned one-liner per tool. The long `description` field stays intact for the LLM-facing JSON Schema (agents still see "Call this ONLY when..." guidance). Output is now ~25 lines, one screen.
- README leads with the free tier. Post-v0.12.160 the product story changed — 8 NVIDIA models free forever, no wallet required to start — but the README still opened with "fund your wallet" as step 2 of Quick Start and buried the free tier in a single line at the bottom. Rewritten so the free tier is the hook, not a footnote: the hero tagline adds "8 models free, no crypto required. No signup. No API key. No credit card." plus a 🆓 shields.io badge; the "Why ClawRouter exists" list opens with "Starts at $0"; the comparison-vs-others table adds a "Free tier" row showing ClawRouter's "8 models, no signup" against OpenRouter's rate limits and LiteLLM/Martian/Portkey's "no"; Quick Start gets a "No wallet? 8 models work free out of the box" callout and reframes step 2 as optional; the routing-profiles table adds `/model free` at 100% savings; the Costs section lists the current 8 free model IDs by name (was a stale 11-model list referencing the retired Nemotron Ultra / Mistral Large / Devstral). This release is README-only — code is identical to v0.12.162 — the version bump exists so the updated marketing reaches the npmjs.com package page and the clawhub marketplace listing.
- ByteDance Seedance video models wired into the client. The BlockRun server has exposed three Seedance models since late April — `bytedance/seedance-1.5-pro` ($0.03/sec), `bytedance/seedance-2.0-fast` ($0.15/sec, ~60–80s gen time), and `bytedance/seedance-2.0` Pro ($0.30/sec) — all 720p, text-to-video + image-to-video, 5s default and up to 10s. The `/v1/videos/generations` proxy passthrough in `src/proxy.ts` already forwarded any `model` value untouched, so actual USDC charges were always correct (the server dictates the amount in its 402 response and `payment-preauth.ts` caches the server-sent `PaymentRequired`, not a local estimate — charges never depended on ClawRouter's local pricing table). Three client-side gaps were fixed anyway:
  - Usage telemetry was wrong for Seedance. `estimateVideoCost` in `src/proxy.ts` only knew `xai/grok-imagine-video`, so every Seedance request logged `$0.42/clip` to `logUsage` regardless of what the user was actually billed — skewing `/usage` output, savings %, and journal cost fields. `VIDEO_PRICING` now carries all four models at real server rates.
  - OpenClaw's native video UI only saw one model. `buildVideoGenerationProvider` in `src/index.ts` advertised `models: ["xai/grok-imagine-video"]`, so users of the UI picker couldn't pick Seedance at all; the only path was raw curl with an explicit `model` field. The `models` array now lists all four, and provider capabilities widen to `maxDurationSeconds: 10` / `supportedDurationSeconds: [5, 8, 10]` to cover both vendors' ranges (the server still validates per-model `maxDurationSeconds`, so invalid combos return a clean 400).
  - README docs only mentioned Grok. The video-generation section now lists all four models in the table, swaps the curl example to `bytedance/seedance-2.0-fast` (sweet-spot price/quality), and makes the upstream-polling note vendor-neutral instead of xAI-specific.
- Docs: fixed proxy port in the free-models guide. Thanks to @Bortlesboat (#160) for catching the `4402` → `8402` typos in `docs/11-free-ai-models-zero-cost-blockrun.md`. The rest of the repo, `src/config.ts` (`DEFAULT_PORT = 8402`), and all other docs have always said 8402; that one guide was sending new users to the wrong local port.
- De-Gemini the Anthropic-primary fallback chains. When Anthropic hiccups (503s, capacity), Gemini's own "high demand" 503s correlate with the same events — agents fall back from Claude to Gemini together, both overloaded. Reordered the `src/router/config.ts` fallback arrays in the two places Anthropic sits primary: `premiumTiers.COMPLEX` (claude-opus-4.7 primary) and `agenticTiers.COMPLEX` (claude-sonnet-4.6 primary). New order: in-family Anthropic hot swap (opus-4.6 / sonnet-4.6) → xAI Grok (independent infra, strong on complex + tool use) → Moonshot Kimi K2.6 / K2.5 (separate Moonshot infra) → OpenAI flagship (slow but reliable) → DeepSeek (cheap, reliable) → `free/qwen3-coder-480b` (NVIDIA free ultimate backstop). Gemini removed entirely from both chains. Other Anthropic-primary tiers (`premiumTiers.REASONING`, `agenticTiers.REASONING`) already had no Gemini and were not touched.
- Free-tier catalog realigned with the BlockRun server (13 → 8 NVIDIA free models). BlockRun retired five NVIDIA free models on 2026-04-21 (`nemotron-ultra-253b`, `nemotron-3-super-120b`, `nemotron-super-49b`, `mistral-large-3-675b`, `devstral-2-123b`) and introduced two new ones benchmark-validated at 114–116 tok/s (`qwen3-next-80b-a3b-thinking` — fastest free reasoning; `mistral-small-4-119b` — fastest free chat). ClawRouter now exposes the same 8 visible free models: `gpt-oss-120b`, `gpt-oss-20b`, `deepseek-v3.2`, `qwen3-coder-480b`, `glm-4.7`, `llama-4-maverick`, `qwen3-next-80b-a3b-thinking`, `mistral-small-4-119b`. Retired IDs still resolve locally via `MODEL_ALIASES` redirects to successors (`free/nemotron-*` → `free/qwen3-next-80b-a3b-thinking`, `free/mistral-large-3-675b` → `free/mistral-small-4-119b`, `free/devstral-2-123b` → `free/qwen3-coder-480b`), matching server-side behavior so stale user configs keep working. Touched: `BLOCKRUN_MODELS` + `MODEL_ALIASES` in `src/models.ts`, the `FREE_MODELS` set in `src/proxy.ts`, the free-model list in the `src/index.ts` picker, the `MODEL_PRICING` fixture in `src/router/strategy.test.ts`, `scripts/update.sh` + `scripts/reinstall.sh` `TOP_MODELS` + slash-command help, the README Budget Models pricing table + Free tier note, and the `skills/clawrouter/SKILL.md` description + Available Models section.
- Kimi K2.5 routing inverted: Moonshot direct is now primary. NVIDIA-hosted `nvidia/kimi-k2.5` was retired 2026-04-21 (slow throughput) and redirects server-side to `moonshot/kimi-k2.5`. ClawRouter mirrors this: `moonshot/kimi-k2.5` is the primary entry (no deprecation flag, full 16K output), and `nvidia/kimi-k2.5` is retained but marked `deprecated: true` with `fallbackModel: "moonshot/kimi-k2.5"`. The aliases `kimi`, `moonshot/kimi-k2.5`, and `nvidia/kimi-k2.5` all resolve to `moonshot/kimi-k2.5`. Router tier configs in `src/router/config.ts` (auto + premium + agentic profiles, 7 occurrences) updated to point at the Moonshot variant.
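The retired-ID redirects can be sketched as a small alias map; the real table lives in `src/models.ts`, and this resolver is a simplified stand-in:

```typescript
// Illustrative subset of the MODEL_ALIASES redirects described above.
const MODEL_ALIASES: Record<string, string> = {
  "free/nemotron-ultra-253b": "free/qwen3-next-80b-a3b-thinking",
  "free/nemotron-3-super-120b": "free/qwen3-next-80b-a3b-thinking",
  "free/nemotron-super-49b": "free/qwen3-next-80b-a3b-thinking",
  "free/mistral-large-3-675b": "free/mistral-small-4-119b",
  "free/devstral-2-123b": "free/qwen3-coder-480b",
  "nvidia/kimi-k2.5": "moonshot/kimi-k2.5",
};

// Stale pins resolve to their successors; everything else passes through.
function resolveModelAlias(id: string): string {
  return MODEL_ALIASES[id] ?? id;
}
```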
- Market data tools — the BlockRun gateway now exposes realtime and historical market data; ClawRouter wires them into OpenClaw as 6 first-class agent tools so the model stops scraping finance sites. Paid ($0.001 via x402, same wallet as LLM calls): `blockrun_stock_price` and `blockrun_stock_history` across 12 global equity markets (US, HK, JP, KR, UK, DE, FR, NL, IE, LU, CN, CA). Free (no x402 charge): `blockrun_stock_list` (ticker lookup / company-name search), `blockrun_crypto_price` (BTC-USD, ETH-USD, SOL-USD, …), `blockrun_fx_price` (EUR-USD, GBP-USD, JPY-USD, …), and `blockrun_commodity_price` (XAU-USD gold, XAG-USD silver, XPT-USD platinum). Tool schemas advertise market codes, session hints (pre/post/on), and bar resolutions (1/5/15/60/240/D/W/M). Path routing extended: the partner-proxy whitelist in `src/proxy.ts` now matches `/v1/(?:x|partner|pm|exa|modal|stocks|usstock|crypto|fx|commodity)/`, routing all new paths through `proxyPaidApiRequest` (payFetch handles 402 when present, passes through 200 for free categories). Tool definitions added in `src/partners/registry.ts`; `skills/clawrouter/SKILL.md` gains a "Built-in Agent Tools" section listing market data + X intelligence + Polymarket alongside the LLM router.
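The extended whitelist match can be sketched as follows; the regex mirrors the one quoted above, while the helper name is illustrative:

```typescript
// Paid-route matcher mirroring the partner-proxy whitelist described above.
const PARTNER_ROUTE = /^\/v1\/(?:x|partner|pm|exa|modal|stocks|usstock|crypto|fx|commodity)\//;

function isPartnerPath(pathname: string): boolean {
  return PARTNER_ROUTE.test(pathname);
}
```

Anything outside these prefixes keeps falling through to the normal chat-completion handling.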
- SKILL.md data-flow + key-storage transparency — second-pass fix for the OpenClaw scanner on clawhub.ai. After v0.12.157 cleared the original scanner concerns (opaque credentials, implied multi-provider keys, no install artifact), a deeper rescan surfaced three new, more nuanced flags: (1) prompts going to blockrun.ai is a data-privacy risk not obvious from a "local router" framing, (2) the wallet private-key storage location/encryption was undocumented, (3) users may expect strictly-local routing. All three addressed: (a) the description frontmatter and body lead reframed as "Hosted-gateway LLM router" + "This is not a local-inference tool", with an explicit Ollama pointer for users who need local-only; (b) a new Data Flow section with an ASCII diagram, enumerated sent/not-sent lists, and a link to https://blockrun.ai/privacy; (c) a new Credentials & Local Key Storage section documenting config file locations per OS (`~/.config/openclaw`, `~/Library/Application Support/openclaw`, `%APPDATA%\openclaw`), `0600` POSIX permissions, plaintext storage parity with other OpenClaw provider keys, encryption guidance (FileVault/LUKS/BitLocker or a burner wallet), and a `src/wallet.ts` source pointer for key-derivation auditing; (d) a new Supply-Chain Integrity section with `npm pack` verification instructions and the tagged-release invariant from the release checklist.
- SKILL.md credential transparency — rewrote `skills/clawrouter/SKILL.md` to clear the OpenClaw scanner's medium-confidence suspicious verdict on clawhub.ai. Frontmatter now declares `repository: https://github.com/BlockRunAI/ClawRouter`, `license: MIT`, and a structured `metadata.openclaw.install` array (`kind: node`, `package: @blockrun/clawrouter`, `bins: [clawrouter]`) so the registry entry has an auditable install artifact instead of a bare bash block. The body adds a Credentials & Data Handling section fully enumerating what `models.providers.blockrun` stores (`walletKey`/`solanaKey` — auto-generated locally, never transmitted; `gateway`/`routing` — non-sensitive), and explicitly states that the plugin does not collect or forward third-party provider API keys (OpenAI/Anthropic/Google/DeepSeek/xAI/NVIDIA) — the blockrun.ai gateway owns those relationships and routes on the server side. Addresses the three scanner flags (opaque credential declaration, implied multi-provider credential collection, no install artifact for review) raised against v0.12.156 on https://clawhub.ai/1bcmax/clawrouter.
- Kimi K2.6 added — Moonshot's new flagship (`moonshot/kimi-k2.6`, 256K context, vision + reasoning, $0.95 in / $4.00 out per 1M) registered in `BLOCKRUN_MODELS` with a `kimi-k2.6` alias. Added to the curated `/model` picker list (`src/index.ts`, `scripts/update.sh`, `scripts/reinstall.sh`), the README pricing table, `docs/routing-profiles.md`, and the AI-agent-facing model catalog in `skills/clawrouter/SKILL.md`. The premium routing tier (`blockrun/premium`) now uses K2.6 as the SIMPLE primary and as a fallback in MEDIUM/COMPLEX, with `nvidia/kimi-k2.5` retained as the first fallback for reliability. The generic `kimi`/`moonshot` aliases still resolve to `nvidia/kimi-k2.5` (matches the BlockRun server's `blockrun/kimi` stance); users opt in to K2.6 explicitly via `kimi-k2.6` or `blockrun/premium`.
- GitHub restored as canonical source — the BlockRunAI GitHub org is back. `package.json` `repository.url`, README badges, the CONTRIBUTING clone URL, `openclaw.security.json`, all docs (`anthropic-*`, `clawrouter-cuts-*`, `clawrouter-vs-openrouter`, `11-free-ai-models`, `llm-router-benchmark-*`, `smart-llm-router-14-dimension-classifier`, `subscription-failover`, `troubleshooting`), `skills/release/SKILL.md`, and the `sse-error-format` regression-test comment now point at github.com/BlockRunAI/ClawRouter. The GitLab mirror (gitlab.com/blockrunai/ClawRouter) is kept as a secondary remote for redundancy but is no longer advertised. Metadata + docs only; no runtime/code changes.
- Docs: video generation endpoint — the README now documents `POST /v1/videos/generations` with `xai/grok-imagine-video` ($0.05/sec, 8s default). The proxy handler, cost estimator (`estimateVideoCost`), and local-file download path were already in place in `proxy.ts`; only the README was missing.
- Docs: Grok Imagine image models — the README image table now includes `xai/grok-imagine-image` ($0.02) and `xai/grok-imagine-image-pro` ($0.07), already wired into the image pricing map.
- Claude Opus 4.7 flagship — the BlockRun API has promoted `anthropic/claude-opus-4.7` to flagship (1M context, 128K output, adaptive thinking; $5/$25 per 1M tokens). Added to `BLOCKRUN_MODELS`, now the primary for the COMPLEX routing tier across default/premium profiles and the new cost-savings `BASELINE_MODEL_ID`. Aliases: `opus`, `opus-4`, `anthropic/opus`, `anthropic/claude-opus-4`, and `anthropic/claude-opus-4.5` now resolve to 4.7. Explicit 4.6 pins (`opus-4.6`, `anthropic/claude-opus-4-6`) still route to 4.6, which the server keeps available. Opus 4.7 is also added to the curated `TOP_MODELS` picker list and the `doctor` command. Opus 4.6 ClawRouter metadata updated to match server specs (1M/128K, was stale at 200K/32K).
- Repository URL fixed — `package.json` `repository.url` now points at gitlab.com/blockrunai/ClawRouter. The previous value (github.com/BlockRunAI/ClawRouter) has been dead since the GitHub org was banned 2026-04-15. Metadata-only bump; no code changes.
- Stop bundling blockrun-mcp — ClawRouter no longer auto-injects `mcp.servers.blockrun` into `~/.openclaw/openclaw.json`. The `npx -y @blockrun/mcp@latest` spawns were leaking shell-wrapper + node grandchild processes on the host (see reports of 70+ orphaned processes accumulating). Removal of the injection call is matched by a one-shot migration that strips any previously managed `mcp.servers.blockrun` entry the next time the gateway starts. User-defined `blockrun` MCP entries are preserved. Restart your gateway after upgrading to free any already-leaked processes. Users who still want the MCP bridge can opt in manually: `openclaw mcp add blockrun npx -y @blockrun/mcp@latest`.
- Predexon tools registered — 8 Predexon endpoints are now registered as real OpenClaw tools (`blockrun_predexon_events`, `blockrun_predexon_leaderboard`, `blockrun_predexon_markets`, `blockrun_predexon_smart_money`, `blockrun_predexon_smart_activity`, `blockrun_predexon_wallet`, `blockrun_predexon_wallet_pnl`, `blockrun_predexon_matching_markets`). The agent will now call these directly instead of falling back to browser scraping.
- Partner tools GET support — the `tools.ts` execute function now handles GET endpoints with query params and path-param substitution (`:wallet`, `:condition_id`, etc.).
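The `:wallet`-style path-param substitution can be sketched as follows (the helper and its return shape are illustrative, not the actual `tools.ts` code):

```typescript
// Illustrative sketch: fill :name placeholders from params; whatever is
// left over would be serialized as a query string for GET endpoints.
function substitutePathParams(
  template: string,
  params: Record<string, string>,
): { path: string; leftover: Record<string, string> } {
  const leftover: Record<string, string> = { ...params };
  const path = template.replace(/:([A-Za-z_]+)/g, (match, name: string) => {
    if (!(name in leftover)) return match; // unknown placeholder left intact
    const value = encodeURIComponent(leftover[name]);
    delete leftover[name]; // consumed params don't repeat in the query string
    return value;
  });
  return { path, leftover };
}
```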
- Skill priority fix — the `predexon` and `x-api` skills now explicitly instruct the agent not to use browser/web_fetch for these data sources, ensuring the structured API is always used over scraping.
- Predexon skill — New vendor skill ships with ClawRouter: 39 prediction market endpoints (Polymarket, Kalshi, dFlow, Binance, cross-market matching, wallet analytics, smart money). OpenClaw agents now auto-invoke this skill when users ask about prediction markets, market odds, or smart money positioning.
- Partner proxy extended — `/v1/pm/*` paths now route through ClawRouter's partner proxy (same as `/v1/x/*`), enabling automatic x402 payment for all Predexon endpoints via `localhost:8402`.
- Free model cost logging — usage stats incorrectly showed non-zero cost for free models (e.g. `free/gpt-oss-120b` showed $0.001 per request due to the `MIN_PAYMENT_USD` floor in `calculateModelCost`). Free models now log `cost: $0.00` and `savings: 100%`, accurately reflecting that no payment is made.
- `/doctor` checks the correct chain balance — previously it always checked Base (EVM), showing $0.00 for Solana-funded wallets. Now calls `resolvePaymentChain()` and uses `SolanaBalanceMonitor` when on Solana. Shows the active chain label and hints to run `/wallet solana` if the balance is empty on Base.
- Strip thinking tokens from non-streaming responses — free models leaked `<think>...</think>` blocks in non-streaming responses. `stripThinkingTokens()` was only applied in the streaming path — it now also runs on non-streaming JSON responses.
- Preserve OpenClaw channels on install/update — `reinstall.sh` and `update.sh` now back up `~/.openclaw/credentials/` before `openclaw plugins install` and always restore it after, preventing WhatsApp/Telegram channel disappearance.
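The cost-logging fix amounts to bypassing the minimum-payment floor for `free/*` models; the constant value and signature below are simplified stand-ins for the real `calculateModelCost`:

```typescript
// Simplified stand-in for the free-model branch of calculateModelCost.
const MIN_PAYMENT_USD = 0.001; // minimum x402 charge for paid models (assumed value)

function calculateModelCost(model: string, rawCostUsd: number): number {
  if (model.startsWith("free/")) return 0; // no payment is ever made
  return Math.max(rawCostUsd, MIN_PAYMENT_USD);
}
```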
- Blog section in README — 6 blog posts linked from the repo, including "11 Free AI Models, Zero Cost".
- BRCC ecosystem block — Replaced SocialClaw with BRCC (BlockRun for Claude Code) in the README ecosystem section.
- `blockrun.ai/brcc-install` short link — redirect for the BRCC install script.
- 11 free models — GPT-OSS 20B/120B, Nemotron Ultra 253B, Nemotron Super 49B/120B, DeepSeek V3.2, Mistral Large 3, Qwen3 Coder 480B, Devstral 2 123B, GLM 4.7, Llama 4 Maverick. All free, no wallet balance needed.
- `/model free` alias — points to nemotron-ultra-253b (strongest free model). All 11 free models are individually selectable via the `/model` picker.
- New model aliases — `nemotron`, `devstral`, `qwen-coder`, `maverick`, `deepseek-free`, `mistral-free`, `glm-free`, `llama-free`, and more (16 total).
- Skills not found by OpenClaw agents — agents tried to read skill files (imagegen, x-api, etc.) from `~/.openclaw/workspace/skills/` but ClawRouter only bundled them inside the npm package. Now auto-copies all user-facing bundled skills into the workspace directory on plugin registration. Supports `OPENCLAW_PROFILE` for multi-profile setups. Only updates when content changes. Fixes `ENOENT: no such file or directory` errors when agents invoke `/imagegen`.
- Internal `release` skill excluded — the release checklist skill is for ClawRouter maintainers only and is no longer installed to user workspaces.
- Sync package-lock.json — the lock file was stuck at v0.12.69; it now matches package.json.
- Plugin crash on string model config — ClawRouter crashed during OpenClaw plugin registration with `TypeError: Cannot create property 'primary' on string 'blockrun/auto'`. This happened when `agents.defaults.model` in the OpenClaw config was a plain string (e.g. `"blockrun/auto"`) instead of the expected object `{ primary: "blockrun/auto" }`. Now auto-converts string/array/non-object model values to the correct object form.
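The auto-conversion can be sketched like this; the accepted shapes follow the description above, and the final fallback default is an assumption, not confirmed behavior:

```typescript
// Illustrative normalization of agents.defaults.model values.
type ModelConfig = { primary: string; fallbacks?: string[] };

function normalizeModelConfig(value: unknown): ModelConfig {
  if (typeof value === "string") return { primary: value };
  if (Array.isArray(value) && value.length > 0) {
    // First entry becomes primary, the rest become fallbacks.
    return { primary: String(value[0]), fallbacks: value.slice(1).map(String) };
  }
  if (value !== null && typeof value === "object" && "primary" in value) {
    return value as ModelConfig; // already the expected object form
  }
  return { primary: "blockrun/auto" }; // assumed default for anything else
}
```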
- Config duplication on update — `update.sh` and `reinstall.sh` accumulated stale `blockrun/*` model entries in `openclaw.json` on every update because only 2 hardcoded deprecated models were removed. Now performs a full reconciliation: removes any `blockrun/*` entries not in the current `TOP_MODELS` list before adding new ones. Non-blockrun entries are untouched.
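The reconciliation (implemented in the shell scripts) can be sketched in TypeScript for clarity; the function name and list-based shape are illustrative:

```typescript
// Illustrative reconciliation: stale blockrun/* entries go, user entries stay.
function reconcileAllowlist(existing: string[], topModels: string[]): string[] {
  // Keep every non-blockrun entry, and any blockrun entry still in TOP_MODELS.
  const kept = existing.filter(
    (id) => !id.startsWith("blockrun/") || topModels.includes(id),
  );
  // Append current TOP_MODELS entries that aren't present yet.
  const missing = topModels.filter((id) => !kept.includes(id));
  return [...kept, ...missing];
}
```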
- OpenClaw skills registration — added `"skills": ["./skills"]` to `openclaw.plugin.json` so OpenClaw actually loads bundled skills (it was missing, so skills were never active).
- imagegen skill — new `skills/imagegen/SKILL.md`: teaches Claude to generate images via `POST /v1/images/generations`, with a model selection table (nano-banana, banana-pro, dall-e-3, flux), size options, and example interactions.
- x-api skill — new `skills/x-api/SKILL.md`: teaches Claude to look up X/Twitter user profiles via `POST /v1/x/users/lookup`, with a pricing table, response schema, and example interactions.
- Image generation docs — new `docs/image-generation.md` with an API reference, curl/TypeScript/Python/OpenAI SDK examples, a model pricing table, and a `/imagegen` command reference.
- Comprehensive docs refresh — architecture updated for dual-chain (Base + Solana), configuration updated with all env vars (`CLAWROUTER_SOLANA_RPC_URL`, `CLAWROUTER_WORKER`), troubleshooting updated for USDC-on-Solana funding, CHANGELOG backfilled for v0.11.14–v0.12.24.
- Preserve user-defined blockrun/* allowlist entries — `injectModelsConfig()` no longer removes user-added `blockrun/*` allowlist entries on gateway restarts.
- `/chain` command — persist payment chain selection (Base or Solana) across restarts via `/chain solana` or `/chain base`.
- Update nudge improved — now shows `npx @blockrun/clawrouter@latest` instead of `curl | bash`.
- Zero balance cache fix — funded wallets are detected immediately (a zero balance is never cached).
- `wallet recover` command — restore `wallet.key` from a BIP-39 mnemonic on a new machine.
- Solana balance retry — retries once on an empty result to handle flaky public RPC endpoints.
- Balance cache invalidated at startup — prevents false free-model fallback after fresh install
- openai/ prefix routing fix — virtual profiles (`blockrun/auto`, etc.) now handle the `openai/` prefix injected by some clients
- Body-read timeout increased — a 5-minute timeout for slow reasoning models prevents proxy hangs
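The prefix fix amounts to stripping `openai/` before the profile lookup. A minimal sketch, with the caveat that `VIRTUAL_PROFILES` membership and the `resolveProfile` name are illustrative, not ClawRouter's real identifiers:

```typescript
// Illustrative profile set — "blockrun/auto" comes from the changelog,
// the other entries are placeholders.
const VIRTUAL_PROFILES = new Set(["blockrun/auto", "blockrun/other-profile"]);

function resolveProfile(requested: string): string | undefined {
  // Some clients prepend "openai/" to whatever model ID they were
  // configured with, e.g. "openai/blockrun/auto" — strip it first.
  const id = requested.startsWith("openai/")
    ? requested.slice("openai/".length)
    : requested;
  return VIRTUAL_PROFILES.has(id) ? id : undefined;
}
```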
- Server-side update nudge — 429 responses from BlockRun now surface update hints when running an outdated ClawRouter version
- Body-read timeout — prevents proxy from hanging on stalled upstream streams
- @solana/kit version fix — pinned to `^5.0.0` to resolve a cross-version signing bug causing `transaction_simulation_failed` (#74)
- `/stats clear` command — reset usage statistics
- Gemini 3 models excluded from tool-heavy routing (#73)
- GPT-5.4 and GPT-5.4 Pro — added to model catalog
- Force agentic tiers on tool presence — requests with a `tools` array always route to agentic-capable models
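The override can be sketched as a tier selector that checks the request body before routing. The tier names match the changelog; the `agentic.*` mapping and `selectTier` signature are assumptions for illustration:

```typescript
type Tier = "SIMPLE" | "MEDIUM" | "COMPLEX" | "REASONING";

// Assumed mapping from scored tier to its agentic counterpart.
const AGENTIC_TIER: Record<Tier, string> = {
  SIMPLE: "agentic.SIMPLE",
  MEDIUM: "agentic.MEDIUM",
  COMPLEX: "agentic.COMPLEX",
  REASONING: "agentic.REASONING",
};

function selectTier(scored: Tier, body: { tools?: unknown[] }): string {
  // A non-empty tools array forces the agentic variant of the tier,
  // so the request can only land on tool-calling-capable models.
  if (Array.isArray(body.tools) && body.tools.length > 0) {
    return AGENTIC_TIER[scored];
  }
  return scored;
}
```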
- Solana sweep fix — correctly attaches signers to sweep transaction message (#70)
- Multi-account sweep — correctly handles partial reads and JSONL resilience in sweep migration
- SPL Token Program ID fix — corrected in Solana sweep transaction
Full Solana chain support. Pay with USDC on Solana (not SOL) alongside Base (EVM).
- SLIP-10 Ed25519 derivation — Solana wallet uses BIP-44 path `m/44'/501'/0'/0'`, compatible with Phantom and other wallets (#69)
- `SolanaBalanceMonitor` — reads SPL Token USDC balance; `proxy.ts` selects the EVM or Solana monitor based on the active chain
- Solana address shown in `/wallet` — displays both EVM (0x...) and Solana (base58) addresses
- Health endpoint — returns Solana address alongside EVM address
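The SLIP-10 derivation for `m/44'/501'/0'/0'` is short enough to sketch directly from the SLIP-0010 spec. This is an illustration of the algorithm, not ClawRouter's actual wallet code, and the function names are placeholders:

```typescript
import { createHmac } from "node:crypto";

// SLIP-0010 Ed25519: master node from seed, then hardened-only children.
const HARDENED = 0x80000000;

function hmac512(key: Buffer | string, data: Buffer): Buffer {
  return createHmac("sha512", key).update(data).digest();
}

function masterKey(seed: Buffer): { key: Buffer; chainCode: Buffer } {
  const I = hmac512("ed25519 seed", seed); // spec-defined HMAC key
  return { key: I.subarray(0, 32), chainCode: I.subarray(32) };
}

function deriveChild(parent: { key: Buffer; chainCode: Buffer }, index: number) {
  // Ed25519 supports hardened derivation only:
  // data = 0x00 || parentKey || ser32(index + 2^31)
  const data = Buffer.alloc(37);
  parent.key.copy(data, 1);
  data.writeUInt32BE((index | HARDENED) >>> 0, 33);
  const I = hmac512(parent.chainCode, data);
  return { key: I.subarray(0, 32), chainCode: I.subarray(32) };
}

// m/44'/501'/0'/0' — the standard Solana BIP-44 path
function solanaKey(seed: Buffer): Buffer {
  let node = masterKey(seed);
  for (const index of [44, 501, 0, 0]) node = deriveChild(node, index);
  return node.key; // 32-byte Ed25519 signing-key seed
}
```

The resulting 32 bytes seed an Ed25519 keypair, which is why wallets like Phantom recover the same address from the same mnemonic-derived seed.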
- Pre-auth cache skipped for Solana — prevents double payment on Solana chain
- Startup balance uses chain-aware monitor — fixes EVM-only startup log when Solana is active
- Chain-aware proxy reuse — validates payment chain matches on EADDRINUSE path
- `ethers` peer dep — added for `@x402/evm` SIWE compatibility
- Free model fallback notification — notifies the user when routing to `gpt-oss-120b` due to insufficient USDC balance
- Input token logging — usage logs now include `inputTokens` from provider responses
- Gemini 3.x in allowlist — replaced Gemini 2.5 with Gemini 3.1 Pro and Gemini 3 Flash Preview
- Top 16 model allowlist — trimmed from 88 to 16 curated models in the `/model` picker (4 routing profiles + 12 popular models)
- Populate model allowlist — populate `agents.defaults.models` with BlockRun models so they appear in the `/model` picker
- Auto-fix broken allowlist — `injectModelsConfig()` detects and removes a blockrun-only allowlist on every gateway start
- Allowlist cleanup in `reinstall.sh` — detect and remove a blockrun-only allowlist that hid all other models
- `clawrouter report` command — daily/weekly/monthly usage reports via `npx @blockrun/clawrouter report`
- `clawrouter doctor` command — AI diagnostics for troubleshooting
- catbox.moe image hosting — `/imagegen` uploads base64 data URIs to catbox.moe (replaces broken telegra.ph)
- Image upload for Telegram — base64 data URIs from Google image models converted to hosted URLs
- Output raw image URL — `/imagegen` returns a plain URL instead of markdown syntax for Telegram compatibility
Session-level repetition detection: 3 consecutive identical request hashes auto-escalate the session to the next tier (SIMPLE → MEDIUM → COMPLEX → REASONING). Fixes the Kimi K2.5 agentic-loop problem without manual model switching.
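The escalation logic above can be sketched as a small per-session state machine. The class and field names are illustrative, and the request hash is treated as an opaque string:

```typescript
// Illustrative escalation sketch; not ClawRouter's actual implementation.
const TIERS = ["SIMPLE", "MEDIUM", "COMPLEX", "REASONING"] as const;
type Tier = (typeof TIERS)[number];

class SessionEscalator {
  private lastHash = "";
  private repeats = 0;

  /** Returns the tier to use, escalating on the 3rd identical request hash. */
  next(requestHash: string, scored: Tier): Tier {
    this.repeats = requestHash === this.lastHash ? this.repeats + 1 : 1;
    this.lastHash = requestHash;
    if (this.repeats < 3) return scored;
    // 3rd identical request: bump one tier (capped at REASONING) and
    // reset the counter so another 3 repeats can escalate again.
    this.repeats = 0;
    const idx = Math.min(TIERS.indexOf(scored) + 1, TIERS.length - 1);
    return TIERS[idx];
  }
}
```

This is what breaks an agentic loop: a model stuck re-sending the same request gets bumped to a stronger tier instead of looping on the same model.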
Generate images from chat. Calls BlockRun's image generation API with x402 micropayments.
```
/imagegen a cat wearing sunglasses
/imagegen --model dall-e-3 a futuristic city
/imagegen --model banana-pro --size 2048x2048 landscape
```
| Model | Shorthand | Price |
|---|---|---|
| Google Nano Banana (default) | `nano-banana` | $0.05/image |
| Google Nano Banana Pro | `banana-pro` | $0.10/image (up to 4K) |
| OpenAI DALL-E 3 | `dall-e-3` | $0.04/image |
| OpenAI GPT Image 1 | `gpt-image` | $0.02/image |
| Black Forest Flux 1.1 Pro | `flux` | $0.04/image |
- Stop hijacking model picker — removed the allowlist injection that hid non-BlockRun models from the `/model` picker
- Silent fallback to free model — insufficient funds now skips remaining paid models and jumps to the free tier instead of surfacing payment errors
- Anthropic array content extraction — routing now handles the `[{type:"text", text:"..."}]` content format (was extracting an empty string)
- Session startup bias fix — never-downgrade logic: sessions can upgrade tiers but won't lock to the low-complexity startup-message tier
- Session re-pins to fallback — after provider failure, session updates to the actual model that responded instead of retrying the failing primary every turn
- `/debug` command — type `/debug <prompt>` to see routing diagnostics (tier, model, scores, session state) with zero API cost
- Tool-calling model filter — requests with tool schemas automatically skip incompatible models
- Session persistence enabled by default — `deriveSessionId()` hashes the first user message; the model stays pinned for 30 minutes without client headers
- baselineCost fix — hardcoded Opus 4.6 fallback pricing so the savings metric always calculates correctly
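The session-pinning scheme can be sketched as follows. `deriveSessionId` comes from the changelog; the hash choice, truncation, and the `pinModel`/`pinnedModel` helpers are assumptions for illustration:

```typescript
import { createHash } from "node:crypto";

// Illustrative session pinning: hash the first user message, pin the
// chosen model for 30 minutes.
const SESSION_TTL_MS = 30 * 60 * 1000;
const sessions = new Map<string, { model: string; expires: number }>();

function deriveSessionId(firstUserMessage: string): string {
  // Same first message → same session ID, with no client headers needed.
  return createHash("sha256").update(firstUserMessage).digest("hex").slice(0, 16);
}

function pinModel(id: string, model: string, now = Date.now()): void {
  sessions.set(id, { model, expires: now + SESSION_TTL_MS });
}

function pinnedModel(id: string, now = Date.now()): string | undefined {
  const entry = sessions.get(id);
  if (!entry || entry.expires < now) return undefined; // expired → re-route
  return entry.model;
}
```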
- Tool call leaking fix — removed `grok-code-fast-1` from all routing paths (it was outputting tool invocations as plain text)
- Systematic tool-calling guard — `toolCalling` flag on models; incompatible models are filtered from fallback chains
- Async plugin fix — `register()` made synchronous; OpenClaw was silently skipping initialization
- Agentic mode false trigger — `agenticScore` now scores the user prompt only, not the system prompt. Coding-assistant system prompts no longer force all requests to Sonnet.
- OpenClaw tool API contract — fixed `inputSchema` → `parameters`, `execute(args)` → `execute(toolCallId, params)`, and the return format
- Partner tool trigger reliability — directive tool description so AI calls the tool instead of answering from memory
- Baseline cost fix — `BASELINE_MODEL_ID` corrected from `claude-opus-4-5` to `claude-opus-4.6`
- Wallet corruption safety — corrupted wallet files now throw with recovery instructions instead of silently generating a new wallet
- 9-language router — added ES, PT, KO, AR keywords across all 12 scoring dimensions (was 5 languages)
- Claude 4.6 — all Claude models updated to newest Sonnet 4.6 / Opus 4.6
- 7 new models — total 41 (Gemini 3.1 Pro Preview, Gemini 2.5 Flash Lite, o1, o1-mini, gpt-4.1-nano, grok-2-vision)
- 5 pricing fixes — 15-30% better routing from corrected model costs
- 67% cheaper ECO tier — Flash Lite for MEDIUM/COMPLEX