fix: make initial model selection respect active provider env #306
JithendraNara wants to merge 3 commits into Gitlawb:main
Conversation
Pull request overview
This PR fixes cross-provider model selection by making getUserSpecifiedModelSetting() read the model environment variable that corresponds to the active API provider, preventing stale model env vars from unrelated providers from taking precedence (relates to issue #300).
Changes:
- Update `getUserSpecifiedModelSetting()` to choose `GEMINI_MODEL`/`OPENAI_MODEL`/`ANTHROPIC_MODEL` based on `getAPIProvider()`.
- Add regression tests covering stale `GEMINI_MODEL`/`OPENAI_MODEL` leakage across providers.
- Include model-related tests in the existing `test:provider` script.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/utils/model/modelSelection.test.ts | Adds regression tests to ensure provider-aware model env precedence. |
| src/utils/model/model.ts | Makes env var model resolution provider-aware instead of fixed-order. |
| package.json | Expands test:provider to include src/utils/model/*.test.ts. |
```ts
function clearProviderAndModelEnv(): void {
  delete process.env.ANTHROPIC_MODEL
  delete process.env.GEMINI_MODEL
  delete process.env.OPENAI_MODEL
  delete process.env.CLAUDE_CODE_USE_GEMINI
  delete process.env.CLAUDE_CODE_USE_GITHUB
  delete process.env.CLAUDE_CODE_USE_OPENAI
}
```
clearProviderAndModelEnv() only clears Gemini/GitHub/OpenAI provider flags. If CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, or CLAUDE_CODE_USE_FOUNDRY are set in the runner environment, the "first-party" test may not actually execute the firstParty branch. Clear the remaining provider-selection env vars here (consistent with src/utils/model/providers.test.ts).
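One way to address this, sketched below, is to drive the helper from a single list of env var names so the Bedrock/Vertex/Foundry flags named in the review cannot be missed. The list contents beyond what appears in the diff are taken from this comment; the exact set used in `providers.test.ts` should be confirmed against that file.

```typescript
// All provider-selection flags and model env vars the tests need cleared.
// The BEDROCK/VERTEX/FOUNDRY entries come from the review comment above.
const PROVIDER_AND_MODEL_ENV_VARS = [
  'ANTHROPIC_MODEL',
  'GEMINI_MODEL',
  'OPENAI_MODEL',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_GITHUB',
  'CLAUDE_CODE_USE_OPENAI',
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'CLAUDE_CODE_USE_FOUNDRY',
] as const

function clearProviderAndModelEnv(): void {
  for (const name of PROVIDER_AND_MODEL_ENV_VARS) {
    delete process.env[name]
  }
}
```

Keeping one list also makes it easy to share the helper between this file and `providers.test.ts` later.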
```ts
test('openai provider ignores stale gemini model env when choosing the main loop model', () => {
  clearProviderAndModelEnv()
  process.env.CLAUDE_CODE_USE_OPENAI = '1'
  process.env.OPENAI_MODEL = 'llama-3.3-70b-versatile'
  process.env.GEMINI_MODEL = 'gemini-2.0-flash-exp'

  expect(getUserSpecifiedModelSetting()).toBe('llama-3.3-70b-versatile')
})

test('gemini provider ignores stale openai model env when choosing the main loop model', () => {
  clearProviderAndModelEnv()
  process.env.CLAUDE_CODE_USE_GEMINI = '1'
  process.env.GEMINI_MODEL = 'gemini-2.5-flash'
  process.env.OPENAI_MODEL = 'gpt-4o'

  expect(getUserSpecifiedModelSetting()).toBe('gemini-2.5-flash')
})
```
These test names say "choosing the main loop model", but the assertion is on getUserSpecifiedModelSetting() (a pre-allowlist/pre-default selection step). Consider renaming the tests (or asserting on getMainLoopModel() if that's what you intend) to keep the test intent precise.
Addressed the Copilot review points in the latest push.
Re-ran the relevant test commands.
```ts
CLAUDE_CODE_USE_BEDROCK: process.env.CLAUDE_CODE_USE_BEDROCK,
CLAUDE_CODE_USE_FOUNDRY: process.env.CLAUDE_CODE_USE_FOUNDRY,
```
We are moving away from CLAUDE; can we rename these constants to OPEN_CLAUDE?
Vasanthdev2004 left a comment:
Looks good to merge from my side.
This targets the right root cause for #300: model selection now follows the active provider instead of letting stale env vars from another provider win. The new regression coverage around OPENAI_MODEL vs GEMINI_MODEL is solid, and pulling the model-selection tests into the provider test path makes sense.
I reran focused validation here with:

```
bun test ./src/utils/model/modelSelection.test.ts
bun test ./src/services/api/codexShim.test.ts ./src/utils/model/modelSelection.test.ts ./src/utils/model/providers.test.ts
cmd /c "set USER_TYPE=& bun run smoke"
cmd /c "set USER_TYPE=& npm run test:provider-recommendation"
```
Please fix conflicts.
Force-pushed e1e78a0 to edeec0c.
Rebased onto the latest main and re-ran the validation commands on the rebased branch.

GitHub now reports the PR as mergeable again; waiting on review/checks to refresh on the new head commit.
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
```diff
   // Read the model env var that matches the active provider to prevent
   // cross-provider leaks (e.g. ANTHROPIC_MODEL sent to the OpenAI API).
   const provider = getAPIProvider()
   specifiedModel =
     (provider === 'gemini' ? process.env.GEMINI_MODEL : undefined) ||
-    (provider === 'openai' || provider === 'gemini' || provider === 'github'
+    (provider === 'openai' || provider === 'codex' || provider === 'github'
       ? process.env.OPENAI_MODEL
       : undefined) ||
     (provider === 'firstParty' ? process.env.ANTHROPIC_MODEL : undefined) ||
     process.env.ANTHROPIC_MODEL ||
     settings.model ||
```
getUserSpecifiedModelSetting() currently falls back to process.env.ANTHROPIC_MODEL unconditionally. That can still cause cross-provider leakage when the active provider is openai/codex/github/gemini and the provider-specific model env var is unset, contradicting the inline comment about preventing ANTHROPIC_MODEL being sent to the OpenAI API. Consider only reading ANTHROPIC_MODEL when the active provider is Anthropic-backed (e.g. firstParty and any other providers that legitimately use ANTHROPIC_MODEL), and otherwise skip it so the function falls back to settings/defaults.
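A minimal sketch of the gated fallback this comment suggests is below. `resolveModelEnv` and the `Provider` union are illustrative stand-ins for the project's actual function and provider type; the point is that only the Anthropic-backed branch reads `ANTHROPIC_MODEL`, so other providers fall through to settings/defaults.

```typescript
// Illustrative provider union; the real set lives in getAPIProvider().
type Provider = 'firstParty' | 'openai' | 'codex' | 'github' | 'gemini'

// Read only the model env var that belongs to the active provider.
// ANTHROPIC_MODEL is no longer an unconditional fallback.
function resolveModelEnv(
  provider: Provider,
  env: Record<string, string | undefined>,
): string | undefined {
  switch (provider) {
    case 'gemini':
      return env.GEMINI_MODEL
    case 'openai':
    case 'codex':
    case 'github':
      return env.OPENAI_MODEL
    case 'firstParty':
      return env.ANTHROPIC_MODEL
  }
}
```

With this shape, a stale `ANTHROPIC_MODEL` simply yields `undefined` on the OpenAI/Gemini paths and the caller proceeds to `settings.model` and the defaults.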
```ts
test('getUserSpecifiedModelSetting prefers OPENAI_MODEL for openai provider over stale GEMINI_MODEL', () => {
  clearProviderAndModelEnv()
  process.env.CLAUDE_CODE_USE_OPENAI = '1'
  process.env.OPENAI_MODEL = 'llama-3.3-70b-versatile'
  process.env.GEMINI_MODEL = 'gemini-2.0-flash-exp'

  expect(getUserSpecifiedModelSetting()).toBe('llama-3.3-70b-versatile')
})
```
The getUserSpecifiedModelSetting() logic now treats codex as an OpenAI-compatible provider (reads OPENAI_MODEL), but the regression tests here only cover openai, gemini, and firstParty. Adding a codex case (e.g., CLAUDE_CODE_USE_OPENAI=1 with a Codex-detected OPENAI_MODEL) would lock in the intended behavior and prevent future regressions for that provider branch.
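The codex branch this comment asks to cover can be exercised as below. To keep the sketch self-contained, `modelForProvider` is a stub that mirrors the provider-aware chain from the diff above; in the real regression test the assertion would go through `getUserSpecifiedModelSetting()` with the provider driven via env flags, and the model names here are placeholders.

```typescript
// Stub mirroring the provider-aware resolution order from the PR diff.
function modelForProvider(
  provider: string,
  env: Record<string, string | undefined>,
): string | undefined {
  return (
    (provider === 'gemini' ? env.GEMINI_MODEL : undefined) ||
    (provider === 'openai' || provider === 'codex' || provider === 'github'
      ? env.OPENAI_MODEL
      : undefined) ||
    env.ANTHROPIC_MODEL
  )
}

// Codex branch: OPENAI_MODEL should win over a stale ANTHROPIC_MODEL.
const chosen = modelForProvider('codex', {
  OPENAI_MODEL: 'llama-3.3-70b-versatile',
  ANTHROPIC_MODEL: 'stale-anthropic-model',
})
```

Locking this in as a named test keeps the `codex` arm of the condition from silently regressing to the Anthropic fallback.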
Summary

- Make `getUserSpecifiedModelSetting()` choose the model env var for the active provider instead of checking unrelated provider env vars first
- Add regression tests for `GEMINI_MODEL`/`OPENAI_MODEL` cross-provider leakage

Impact

- The correct model is now selected when both `OPENAI_MODEL` and `GEMINI_MODEL` are present in the environment

Root Cause
`getUserSpecifiedModelSetting()` in `src/utils/model/model.ts` previously resolved model env vars in a fixed order: `ANTHROPIC_MODEL`, then `GEMINI_MODEL`, then `OPENAI_MODEL`. That ignored the active provider and allowed stale env vars from a different provider to win.

A concrete failure case was:
- `CLAUDE_CODE_USE_OPENAI=1`
- `OPENAI_MODEL=llama-3.3-70b-versatile`
- `GEMINI_MODEL=gemini-2.0-flash-exp`

In that state, the runtime could still resolve `gemini-2.0-flash-exp` even though the UI/provider path was OpenAI-compatible.
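The failure case can be reproduced with a small sketch. `fixedOrderModel` below mirrors the old fixed precedence and `providerAwareModel` the corrected behavior; both are simplified stand-ins for the real function, not the shipped code.

```typescript
// Env state from the concrete failure case above.
const env: Record<string, string | undefined> = {
  CLAUDE_CODE_USE_OPENAI: '1',
  OPENAI_MODEL: 'llama-3.3-70b-versatile',
  GEMINI_MODEL: 'gemini-2.0-flash-exp',
}

// Old behavior: fixed ANTHROPIC -> GEMINI -> OPENAI precedence, so the
// stale Gemini value wins even on the OpenAI path.
function fixedOrderModel(e: Record<string, string | undefined>): string | undefined {
  return e.ANTHROPIC_MODEL || e.GEMINI_MODEL || e.OPENAI_MODEL
}

// New behavior: the active provider flag decides which env var is read.
function providerAwareModel(e: Record<string, string | undefined>): string | undefined {
  if (e.CLAUDE_CODE_USE_GEMINI) return e.GEMINI_MODEL
  if (e.CLAUDE_CODE_USE_OPENAI) return e.OPENAI_MODEL
  return e.ANTHROPIC_MODEL
}
```

Running both against the same env shows the bug and the fix side by side: the fixed-order lookup returns the stale Gemini model, the provider-aware one returns the OpenAI model.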
Fix

Make model selection provider-aware:
- The Gemini provider reads `GEMINI_MODEL`
- OpenAI-compatible providers (openai/codex/github) read `OPENAI_MODEL`
- The first-party Anthropic provider reads `ANTHROPIC_MODEL`

This keeps model resolution aligned with the active provider instead of unrelated leftover env vars.
Testing

- `bun test src/utils/model/modelSelection.test.ts`
- `env -u USER_TYPE bun run test:provider`
- `env -u USER_TYPE bun run smoke`
- `env -u USER_TYPE npm run test:provider-recommendation`

Notes
Relates to #300