
Releases: link-assistant/agent

[js] 0.13.2

14 Feb 21:07

Fix ProviderModelNotFoundError for newly added models like kimi-k2.5-free

When the models.dev cache was missing or stale, the refresh() call was not awaited,
causing the agent to use outdated/empty cache data. This led to ProviderModelNotFoundError
for models that exist in the remote API but weren't in the local cache.

The fix ensures that:

  • When no cache exists (first run): await refresh() before proceeding
  • When cache is stale (> 1 hour old): await refresh() to get updated model list
  • When cache is fresh: trigger background refresh but use cached data immediately
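
A minimal sketch of this decision logic, assuming illustrative names (the documented staleness threshold is 1 hour; the actual agent internals may differ):

```typescript
// "await-refresh" means block on refresh() before resolving models;
// "background-refresh" means serve cached data and refresh asynchronously.
type CacheAction = "await-refresh" | "background-refresh";

const STALE_MS = 60 * 60 * 1000; // 1 hour, per the release notes

function decideRefresh(cacheMtimeMs: number | undefined, nowMs: number): CacheAction {
  if (cacheMtimeMs === undefined) return "await-refresh";      // first run: no cache at all
  if (nowMs - cacheMtimeMs > STALE_MS) return "await-refresh"; // stale: wait for updated list
  return "background-refresh";                                 // fresh: use cache immediately
}
```

The key point of the fix is that the first two branches must `await refresh()` rather than fire it and continue with empty or outdated data.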

Fixes #175

Related Pull Request: #176


[js] 0.13.1

14 Feb 19:58

Fix indefinite hang when using Kilo provider by adding timeout to BunProc.run (#173)

  • Add DEFAULT_TIMEOUT_MS (2 minutes) for subprocess commands
  • Add INSTALL_TIMEOUT_MS (60 seconds) for package installation
  • Create TimeoutError for better error handling and retry logic
  • Add retry logic for timeout errors (up to 3 attempts)
  • Add helpful error messages for timeout and recovery scenarios

This prevents indefinite hangs caused by known Bun package manager issues.
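
The bullets above can be sketched roughly as follows (the timeout constants are the documented values; the wrapper shapes are assumptions, not the actual BunProc internals):

```typescript
class TimeoutError extends Error {}

const DEFAULT_TIMEOUT_MS = 2 * 60 * 1000; // documented: 2 minutes for subprocess commands
const MAX_ATTEMPTS = 3;                   // documented: up to 3 attempts on timeout

// Reject with TimeoutError if the wrapped work does not settle within ms.
function withTimeout<T>(work: Promise<T>, ms: number, label: string): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new TimeoutError(`${label} timed out after ${ms}ms`)), ms);
    work.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

// Retry only timeouts; any other failure propagates immediately.
async function runWithRetry<T>(fn: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      return await fn();
    } catch (e) {
      if (!(e instanceof TimeoutError)) throw e;
      lastError = e;
    }
  }
  throw lastError;
}
```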

Related Pull Request: #174


[js] 0.13.0

14 Feb 13:01

Fix Kilo provider integration: correct API endpoint, SDK, model IDs, and add device auth support (#171)

  • Fix base URL from /api/gateway to /api/openrouter
  • Switch SDK from @ai-sdk/openai-compatible to @openrouter/ai-sdk-provider
  • Fix all model ID mappings to match actual Kilo API identifiers
  • Add Kilo device auth plugin for agent auth login
  • Add required Kilo headers (User-Agent, X-KILOCODE-EDITORNAME)
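
A hedged sketch of the header wiring (the header values and the `createOpenRouter` option shape are assumptions based on the bullets above, not the shipped code):

```typescript
// Build the headers Kilo requires on every request.
function kiloHeaders(editorName: string, version: string): Record<string, string> {
  return {
    "User-Agent": `${editorName}/${version}`,
    "X-KILOCODE-EDITORNAME": editorName,
  };
}

// The provider itself would then be created roughly like:
//   import { createOpenRouter } from "@openrouter/ai-sdk-provider";
//   const kilo = createOpenRouter({
//     baseURL: "https://api.kilo.ai/api/openrouter", // was /api/gateway
//     headers: kiloHeaders("agent", agentVersion),
//   });
```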

Related Pull Request: #172


[js] 0.12.3

14 Feb 12:31

fix: Skip malformed SSE events instead of crashing (AI_JSONParseError)

When an AI gateway (e.g. OpenCode Zen) corrupts SSE stream chunks while proxying
provider responses (e.g. Kimi K2.5), the Vercel AI SDK emits an error event
with AI_JSONParseError but continues the stream. Previously, the processor
threw on all error events, terminating the session.

Now, following the OpenAI Codex approach (skip-and-continue), the processor
detects JSONParseError in stream error events, logs a warning, and continues
processing subsequent valid chunks. This prevents a single corrupted SSE event
from terminating an entire session.

  • Skip JSONParseError in processor.ts stream error handler (Codex approach)
  • Remove StreamParseError retry infrastructure (skip, don't retry)
  • Add case study with comparison of 4 CLI agents (Codex, Gemini, Qwen, OpenCode)
  • Filed upstream issues: vercel/ai#12595, anomalyco/opencode#13579
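
The skip-and-continue policy can be sketched as follows (the handler and helper names are hypothetical; the `name` check reflects how the AI SDK tags its JSON parse errors):

```typescript
// AI SDK parse errors carry the name "AI_JSONParseError".
function isRecoverableParseError(err: unknown): boolean {
  return err instanceof Error && err.name === "AI_JSONParseError";
}

// Decide what a stream error event should do: skip the chunk or end the session.
function handleStreamError(err: unknown, warn: (msg: string) => void): "continue" | "throw" {
  if (isRecoverableParseError(err)) {
    warn(`Skipping malformed SSE chunk: ${(err as Error).message}`);
    return "continue"; // keep processing subsequent valid chunks
  }
  return "throw"; // all other errors still terminate the session
}
```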

Fixes #169

Related Pull Request: #170


[js] 0.9.0

13 Feb 11:55

feat(google): Improve Google AI subscription support via Cloud Code API

Implements proper Google AI subscription authentication with the following improvements:

  • Add user onboarding flow (loadCodeAssist + onboardUser) for automatic tier provisioning
  • Add alt=sse query parameter for streaming requests (matching Gemini CLI behavior)
  • Add thoughtSignature injection for Gemini 3+ function calls to prevent 400 errors
  • Add retry logic with exponential backoff for transient 429/503 errors
  • Add project context caching to avoid repeated onboarding API calls
  • Support configurable Cloud Code API endpoint via CODE_ASSIST_ENDPOINT env var
  • Use dynamic package version in x-goog-api-client header
  • Add comprehensive case study analysis for issue #102

These changes align the implementation with the official Gemini CLI and opencode-gemini-auth plugin,
enabling reliable subscription-based access without requiring API keys.
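
The transient-error retry described above can be sketched like this (the delay constants are illustrative assumptions, only the 429/503 status codes come from the release notes):

```typescript
const BASE_DELAY_MS = 1_000;  // assumed starting delay
const MAX_DELAY_MS = 32_000;  // assumed cap per attempt

// Exponential backoff: 1s, 2s, 4s, ... capped at MAX_DELAY_MS.
function backoffDelay(attempt: number): number {
  return Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
}

// Only rate limiting (429) and service unavailable (503) are retried.
function isTransient(status: number): boolean {
  return status === 429 || status === 503;
}
```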

Related Pull Request: #103


[js] 0.8.22

13 Feb 10:05

Changed default model from opencode/gpt-5-nano to opencode/kimi-k2.5-free

Updated free models list in order of recommendation:

  1. kimi-k2.5-free (recommended; new default)
  2. minimax-m2.1-free
  3. gpt-5-nano
  4. glm-4.7-free
  5. big-pickle

Added a deprecation warning for the grok-code model, which is no longer included as a free model in the OpenCode Zen subscription.

Related Pull Request: #134


[js] 0.8.21

13 Feb 07:29

fix: make toModelMessage async for AI SDK 6.0 compatibility

The AI SDK 6.0 changed convertToModelMessages() from synchronous to asynchronous,
which caused "Spread syntax requires ...iterable[Symbol.iterator] to be a function"
errors when spreading the result.

Changes:

  • Make MessageV2.toModelMessage async and await convertToModelMessages
  • Update all callers in prompt.ts, compaction.ts, summary.ts to await
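
A minimal sketch of the change (the stand-in converter below plays the role of the SDK's now-async `convertToModelMessages`; the real method lives on `MessageV2`):

```typescript
// Stand-in for the AI SDK 6.0 converter, which now returns a Promise.
async function convertToModelMessagesStub(msgs: string[]): Promise<string[]> {
  return msgs.map((m) => `model:${m}`);
}

async function toModelMessage(msgs: string[]): Promise<string[]> {
  // Before the fix: [...convertToModelMessages(msgs)] spread a Promise,
  // which is not iterable, producing the Symbol.iterator error above.
  return [...(await convertToModelMessagesStub(msgs))];
}
```

Every caller (prompt.ts, compaction.ts, summary.ts) must in turn `await toModelMessage(...)`.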

Fixes #155

Related Pull Request: #156


[js] 0.12.1

13 Feb 23:36

Fix explicit provider/model routing for Kilo provider

When users specify an explicit provider/model combination like kilo/glm-5-free, the system now correctly uses that provider instead of silently falling back to the default (opencode).
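
A hedged sketch of the resolution behavior (the model-to-provider table is illustrative; `parseModelWithResolution` is the documented function name, its exact signature is an assumption):

```typescript
// Illustrative mapping of short model names to their providers.
const SHORT_MODEL_PROVIDERS: Record<string, string> = {
  "glm-5-free": "kilo",
  "kimi-k2.5-free": "opencode",
};

function parseModelWithResolution(spec: string): { provider: string; model: string } {
  const slash = spec.indexOf("/");
  if (slash !== -1) {
    // Explicit provider: honor it exactly as given. Downstream code now
    // throws on an unknown explicit provider instead of silently falling
    // back to the default.
    return { provider: spec.slice(0, slash), model: spec.slice(slash + 1) };
  }
  const provider = SHORT_MODEL_PROVIDERS[spec];
  if (!provider) throw new Error(`Unknown model: ${spec}`);
  return { provider, model: spec };
}
```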

  • Add resolveShortModelName() to route short model names to providers
  • Add parseModelWithResolution() for model string parsing with resolution
  • Modify prompt.ts to throw an error instead of falling back when an explicit provider is given
  • Add getAlternativeProviders() for rate limit fallback on shared models
  • Document free model distribution between OpenCode and Kilo

Also includes ed7f9fc (fix: time-based retry for rate limits at fetch level):

Implement custom fetch wrapper to handle HTTP 429 (rate limit) responses at the HTTP layer,
ensuring the agent's time-based retry configuration is respected instead of the AI SDK's
fixed retry count (3 attempts).

Changes:

  • Add RetryFetch wrapper that intercepts 429 responses before AI SDK's internal retry
  • Parse retry-after and retry-after-ms headers from server responses
  • Use exponential backoff when no header is present (up to 20 minutes per retry)
  • Respect AGENT_RETRY_TIMEOUT (default: 7 weeks) as global timeout
  • Add AGENT_MIN_RETRY_INTERVAL (default: 30 seconds) to prevent rapid retry attempts
  • Retry network errors (socket/connection issues) with exponential backoff
  • Compose with existing custom fetch functions (OAuth, timeout wrappers)

This fixes the issue where the AI SDK exhausted its 3 retry attempts before the agent's
retry logic could wait for the server's retry-after period (e.g., 64 minutes).
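
The header-driven delay selection inside such a wrapper might look like this (a sketch only: the base delay matches the documented AGENT_MIN_RETRY_INTERVAL default of 30 seconds, and HTTP-date forms of retry-after are deliberately not handled here):

```typescript
const MIN_RETRY_INTERVAL_MS = 30_000;        // documented default
const MAX_RETRY_DELAY_MS = 20 * 60 * 1000;   // documented: up to 20 minutes per retry

// Pick the wait before the next attempt: prefer server hints, else back off.
function retryDelayMs(
  headers: { get(name: string): string | null },
  attempt: number,
): number {
  const ms = headers.get("retry-after-ms");
  if (ms !== null) return Number(ms);
  const sec = headers.get("retry-after");
  if (sec !== null && !Number.isNaN(Number(sec))) return Number(sec) * 1000;
  // No usable header: exponential backoff from the minimum interval, capped.
  return Math.min(MIN_RETRY_INTERVAL_MS * 2 ** attempt, MAX_RETRY_DELAY_MS);
}
```

With this in place, a server answering `retry-after: 3840` (64 minutes) is waited out by the agent's own timer rather than exhausting the AI SDK's fixed 3 attempts.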

Fixes #167

Related Pull Request: #166


[js] 0.12.0

13 Feb 21:54

Add Kilo Gateway provider with free models support

  • Add Kilo Gateway provider with 6 free models (GLM-5, GLM 4.7, Kimi K2.5, MiniMax M2.1, Giga Potato, Trinity Large Preview)
  • GLM-5 is the flagship free model with 202K context window
  • OpenAI-compatible API at https://api.kilo.ai/api/gateway
  • No API key required for free models
  • Add comprehensive documentation for Kilo Gateway usage

Related Pull Request: #160


[js] 0.11.0

13 Feb 21:23

Add --generate-title flag and enhanced retry logic with exponential backoff

  • Add --generate-title CLI option (disabled by default) to save tokens on title generation
  • Implement retry with exponential backoff up to 20 minutes per retry, 7 days total timeout
  • Add --retry-timeout option to configure maximum retry duration (default: 7 days)
  • Respect retry-after headers from API responses
  • Add jitter to prevent thundering herd on retries
  • Track retry state per error type (different errors reset the timer)
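
The jittered backoff described above might be sketched as "full jitter" (an assumption about the strategy; only the 20-minute cap is from the release notes):

```typescript
const MAX_RETRY_DELAY_MS = 20 * 60 * 1000; // documented cap: 20 minutes per retry

// Full jitter: pick a uniform random delay in [0, cap), so many clients
// retrying at once do not stampede the server in lockstep.
function jitteredDelay(attempt: number, rand: () => number = Math.random): number {
  const cap = Math.min(1000 * 2 ** attempt, MAX_RETRY_DELAY_MS);
  return Math.floor(rand() * cap);
}
```

Injecting `rand` keeps the function deterministic under test while defaulting to `Math.random` in production.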

Related Pull Request: #158

