Ollama failover UX — show which local model was attempted before switching #10
Open
Labels
bug · enhancement
Description
Current behaviour
When Ollama is configured but the requested model isn't loaded, the agent silently fails (returns < 20 chars of output) and switches to the next available provider. The user sees `↻ Provider unavailable` but doesn't know which Ollama model was attempted or why it failed.
Expected behaviour
The failover notice should be more descriptive:

    ↻ Ollama — llama3.2 not found locally (is it pulled?)
    Switched to Anthropic / claude-haiku-4-5
And in the agent card, show a tooltip or expandable section with:
- Which model was attempted
- The error received (e.g. `model not found`, `connection refused`)
- A hint: `Run: ollama pull llama3.2`
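A possible shape for the structured failure info shown in the card (field and function names here are hypothetical, not existing code):

```typescript
// Hypothetical shape for the failover details surfaced in the agent card.
type FailoverDetail = {
  attemptedModel: string; // which model was attempted, e.g. "llama3.2"
  error: string;          // e.g. "model not found", "connection refused"
  hint?: string;          // e.g. "Run: ollama pull llama3.2"
};

// Render the tooltip / expandable section body as plain text.
function formatFailoverTooltip(d: FailoverDetail): string {
  const lines = [`Attempted: ${d.attemptedModel}`, `Error: ${d.error}`];
  if (d.hint) lines.push(`Hint: ${d.hint}`);
  return lines.join("\n");
}
```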
Root cause
In `multiAgentEngine.ts`, `streamWithFailover()` catches the error and sets the notice string but discards the original error message. The Ollama-specific error (`model not found`) is lost.
Fix
- In `streamOllama()`, throw a descriptive error: ``throw new Error(`Ollama: model '${modelId}' not found — run: ollama pull ${modelId}`)``
- In `streamWithFailover()`, include the caught error message in the `↻` notice string
- Surface the full message in the agent card output
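A minimal sketch of the proposed fix. The function names mirror the issue (`streamOllama`, `streamWithFailover`), but the signatures, the `Provider` type, and the `loaded` parameter are assumptions for illustration, not the actual engine API:

```typescript
// Hypothetical provider shape; the real engine's type may differ.
type Provider = {
  name: string;
  model: string;
  stream: () => Promise<string>;
};

// streamOllama sketch: throw a descriptive error when the model isn't
// loaded locally, instead of failing silently.
async function streamOllama(modelId: string, loaded: string[]): Promise<string> {
  if (!loaded.includes(modelId)) {
    throw new Error(`Ollama: model '${modelId}' not found — run: ollama pull ${modelId}`);
  }
  return "…streamed output…";
}

// streamWithFailover sketch: keep each caught error's message in the
// ↻ notice instead of discarding it.
async function streamWithFailover(
  providers: Provider[]
): Promise<{ output: string; notices: string[] }> {
  const notices: string[] = [];
  for (const p of providers) {
    try {
      const output = await p.stream();
      return { output, notices };
    } catch (err) {
      const detail = err instanceof Error ? err.message : String(err);
      notices.push(`↻ ${p.name} — ${detail}`);
    }
  }
  throw new Error(`All providers failed:\n${notices.join("\n")}`);
}
```

With this, the notice carries the Ollama-specific detail (model name plus the `ollama pull` hint) through to the agent card rather than a bare "Provider unavailable".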