Reasoning skipped when using custom provider - API key availability check doesn't account for custom endpoints #426

@dguendisch

Description
Bug Description

When using reasoningProvider: 'custom' (e.g. pointing to a local proxy like LiteLLM), the reasoning pipeline is silently skipped because isAvailable() only checks for API keys of built-in providers (OpenAI, Anthropic, Gemini, Groq, Local). The custom provider is never evaluated, so reasoning is always marked as unavailable.

This affects users who use a local model for transcription (Whisper/Parakeet) combined with a cloud model for intelligence via a custom endpoint/proxy.

Steps to Reproduce

  1. Set up a local STT model (e.g. Whisper)
  2. Configure reasoning with provider custom, pointing to a proxy (e.g. LiteLLM)
  3. Set a reasoning model (e.g. gemini-2.5-flash)
  4. Perform a transcription

Expected: Transcribed text is processed by the reasoning model (filler word removal, etc.)
Actual: Reasoning is silently skipped. No call is made to the custom endpoint.

Debug Logs

[DEBUG][reasoning][renderer] REASONING_STORAGE_CHECK { useReasoning: true }
[DEBUG][reasoning][renderer] API_KEY_CHECK {
hasOpenAI: false,
hasAnthropic: false,
hasGemini: false,
hasGroq: false,
hasLocal: false
}
[DEBUG][reasoning][renderer] REASONING_AVAILABILITY { isAvailable: false, reasoningEnabled: true, finalDecision: false }
[DEBUG][reasoning][renderer] REASONING_CHECK {
useReasoning: false,
reasoningModel: 'gemini-2.5-flash',
reasoningProvider: 'custom',
agentName: 'OpenWhispr'
}

Root Cause

isReasoningAvailable() (in audioManager.js) calls ReasoningService.isAvailable(), which only checks for the presence of built-in provider API keys. The custom provider is not considered, so the check returns false even when the custom endpoint is fully configured.

Suggested Fix

isAvailable() should return true when reasoningProvider === 'custom' and a custom endpoint URL is configured.
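A minimal sketch of what the adjusted check could look like (function signature, settings field names such as `customEndpointUrl`, and the key map are illustrative assumptions, not the actual ReasoningService API):

```javascript
// Hypothetical sketch: treat the 'custom' provider as available when an
// endpoint URL is configured, instead of only checking built-in API keys.
function isReasoningAvailable(settings, apiKeys) {
  // Custom provider: the endpoint URL is what matters, not built-in keys.
  if (settings.reasoningProvider === 'custom') {
    return Boolean(settings.customEndpointUrl);
  }
  // Built-in providers: keep the existing key-presence check.
  return Boolean(
    apiKeys.openai ||
    apiKeys.anthropic ||
    apiKeys.gemini ||
    apiKeys.groq ||
    apiKeys.local
  );
}
```

With this shape, the repro above would pass the gate: `reasoningProvider: 'custom'` plus a configured proxy URL returns `true` even when every built-in key check (`hasOpenAI`, `hasAnthropic`, ...) is `false`.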

Workaround

Set a dummy API key for any built-in provider (e.g. paste a dummy value into the OpenAI key field) to pass the gate check. The actual reasoning call still routes correctly through the custom provider.

I have verified that adding a dummy OpenAI key makes the custom provider reasoning work as expected.
