
Fix local provider API key, captain's log display, and caching issues#11

Open
Brainstem2000 wants to merge 3 commits into SpaceMolt:main from Brainstem2000:pr/fix-local-provider-apikey

Conversation

@Brainstem2000
Contributor

@Brainstem2000 Brainstem2000 commented Mar 13, 2026

Summary

  • Fix local provider API key: resolveApiKey() now returns 'local' placeholder for providers in CUSTOM_BASE_URLS (ollama, lmstudio, vllm, etc.) or with custom base_url in the DB. Previously returned undefined, causing pi-ai to fall through to process.env.OPENAI_API_KEY and throw "OpenAI API key is required".
  • Cache-first OpenAPI spec: fetchOpenApiSpec() now checks fresh cache (1h TTL) before making network requests. Prevents 429 rate-limiting when multiple agents start simultaneously.
  • Fix captain's log display: SidePane now reads structuredContent (JSON) instead of result (text summary) for captain's log — MCP v2 returns these as separate fields. Without this, the log pane shows "No log entries" because result is now a string like "Captain's log entry 0 of 20:".
  • Silent UI commands: Added silent option to executeCommand() so UI-driven captain's log queries (20 per profile) don't spam the activity log.
  • HTML cache busting: Production builds now set Cache-Control: no-cache, no-store, must-revalidate on index.html so browser always picks up new content-hashed asset filenames after rebuilds.
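The API-key fix above can be sketched as follows. This is a minimal sketch, not the project's exact code: the `ProviderRecord` shape and the contents of `CUSTOM_BASE_URLS` are assumptions for illustration.

```typescript
// Known local providers that speak an OpenAI-compatible API but need no real key.
// (Illustrative subset; the real set may include more entries.)
const CUSTOM_BASE_URLS = new Set(["ollama", "lmstudio", "vllm"]);

interface ProviderRecord {
  name: string;
  apiKey?: string;   // explicit key, if the user configured one
  baseUrl?: string;  // custom base_url stored in the DB
}

function resolveApiKey(provider: ProviderRecord): string | undefined {
  if (provider.apiKey) return provider.apiKey;
  // Local providers need no real key, but returning undefined would make
  // pi-ai fall back to process.env.OPENAI_API_KEY and throw
  // "OpenAI API key is required" when that is unset, so return a placeholder.
  if (CUSTOM_BASE_URLS.has(provider.name) || provider.baseUrl) return "local";
  return undefined;
}
```

The placeholder is never sent anywhere meaningful; it only satisfies pi-ai's non-empty-key check for providers that ignore the key.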

Test plan

  • Configure an agent with Ollama or another local provider — verify it connects and makes LLM calls without "OpenAI API key is required" error
  • Start multiple agents simultaneously — verify no 429 errors on OpenAPI spec endpoint
  • Connect an agent with captain's log entries — verify they display in the Log section of the SidePane
  • Check activity log (LogPane) — verify no captains_log_list spam entries
  • Rebuild and reload — verify browser picks up new JS bundle without needing hard refresh
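The cache-first fetch exercised by the second test step can be sketched as below. The in-memory cache shape and the injected `fetchFn` parameter are assumptions; the real implementation may store and fetch the spec differently.

```typescript
const SPEC_TTL_MS = 60 * 60 * 1000; // 1-hour TTL, per the PR description

interface CacheEntry<T> {
  value: T;
  fetchedAt: number;
}

let specCache: CacheEntry<unknown> | undefined;

async function fetchOpenApiSpec(
  fetchFn: () => Promise<unknown>,
  now: () => number = Date.now,
): Promise<unknown> {
  // Cache-first: serve a fresh entry without touching the network, so many
  // agents starting at once don't each hit the spec endpoint (429s).
  if (specCache && now() - specCache.fetchedAt < SPEC_TTL_MS) {
    return specCache.value;
  }
  try {
    const value = await fetchFn();
    specCache = { value, fetchedAt: now() };
    return value;
  } catch (err) {
    // Last resort: fall back to a stale cache rather than failing outright.
    if (specCache) return specCache.value;
    throw err;
  }
}
```

Only the first caller inside the TTL window performs a network request; later callers read the cache.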

🤖 Generated with Claude Code

Brainstem2000 and others added 2 commits March 13, 2026 14:04
Two fixes for multi-agent setups:

1. resolveApiKey(): new export that safely resolves API keys for any
   provider. Local providers (ollama, lmstudio, vllm, etc.) get a
   'local' placeholder so pi-ai doesn't fall through to checking
   process.env.OPENAI_API_KEY, which throws "OpenAI API key is
   required" for users who only have local providers configured.

2. fetchOpenApiSpec(): check fresh cache (1h TTL) before hitting the
   server. Previously always fetched first, causing 429 rate-limits
   when multiple agents started simultaneously. Now only fetches when
   cache is stale or missing, with stale cache as last-resort fallback.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- SidePane reads structuredContent (JSON) instead of result (text
  summary) for captain's log — MCP v2 returns these as separate fields
- Added silent option to executeCommand so UI-driven captain's log
  queries don't pollute the activity log (20 calls per profile)
- Set Cache-Control: no-cache on index.html so browser always picks
  up new content-hashed asset filenames after rebuilds

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
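The structuredContent-over-result preference described above can be sketched like this. The field names follow the commit message (MCP v2 returns both); the `ToolResponse` type is an assumption for illustration.

```typescript
interface ToolResponse {
  // Human-readable text summary, e.g. "Captain's log entry 0 of 20:"
  result?: string;
  // Actual JSON payload, returned separately by MCP v2
  structuredContent?: unknown;
}

function extractToolData(resp: ToolResponse): unknown {
  // Prefer the structured JSON when present; fall back to the text summary
  // for servers that only populate `result`. Reading `result` alone is what
  // made the SidePane show "No log entries".
  return resp.structuredContent !== undefined ? resp.structuredContent : resp.result;
}
```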
@Brainstem2000 Brainstem2000 changed the title Fix local provider API key resolution + cache-first OpenAPI spec Fix local provider API key, captain's log display, and caching issues Mar 13, 2026
- Agents on MCP v2 were receiving text summaries (e.g. "Captain's log
  entry 0 of 20:") instead of actual JSON data because executeTool
  used resp.result instead of resp.structuredContent. Now prefers
  structuredContent when available.

- Added 8-second cooldown between action commands within a turn to
  prevent spam loops when game returns "Action pending". Query commands
  are exempt. Strips spacemolt_ prefix for correct lookup.

- Split llm_call into its own "Call" filter checkbox in LogPane so
  model/token metadata can be toggled independently from LLM thoughts.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
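The per-turn action cooldown from the commit above can be sketched as follows. The `QUERY_COMMANDS` set and the module-level timestamp are assumptions; only the 8-second window, the query exemption, and the `spacemolt_` prefix stripping come from the commit message.

```typescript
const ACTION_COOLDOWN_MS = 8_000; // 8-second cooldown between action commands
const QUERY_COMMANDS = new Set(["captains_log_list", "status"]); // illustrative subset

let lastActionAt = 0;

function canRunCommand(rawName: string, now: number): boolean {
  // Strip the spacemolt_ prefix so the lookup matches the bare command name.
  const name = rawName.startsWith("spacemolt_")
    ? rawName.slice("spacemolt_".length)
    : rawName;
  if (QUERY_COMMANDS.has(name)) return true; // queries are never throttled
  if (now - lastActionAt < ACTION_COOLDOWN_MS) return false;
  lastActionAt = now;
  return true;
}
```

This keeps an agent from spam-looping the same action while the game keeps answering "Action pending", while still letting it run queries freely.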