
fix: make initial model selection respect active provider env #306

Open

JithendraNara wants to merge 3 commits into Gitlawb:main from JithendraNara:codex/fix-provider-model-env

Conversation

@JithendraNara

Summary

  • make getUserSpecifiedModelSetting() choose the model env var for the active provider instead of checking unrelated provider env vars first
  • add regression tests for stale GEMINI_MODEL / OPENAI_MODEL cross-provider leakage
  • include those model-selection tests in the existing provider test suite

Impact

  • user-facing impact: OpenAI-compatible sessions no longer pick a stale Gemini model when both OPENAI_MODEL and GEMINI_MODEL are present in the environment
  • developer/maintainer impact: provider env precedence is now covered by the normal provider test path, reducing the chance of future regressions

Root Cause

getUserSpecifiedModelSetting() in src/utils/model/model.ts previously resolved model env vars in a fixed order:

  • ANTHROPIC_MODEL
  • GEMINI_MODEL
  • OPENAI_MODEL

That ignored the active provider and allowed stale env vars from a different provider to win.

A concrete failure case was:

  • CLAUDE_CODE_USE_OPENAI=1
  • OPENAI_MODEL=llama-3.3-70b-versatile
  • stale GEMINI_MODEL=gemini-2.0-flash-exp

In that state, the runtime could still resolve gemini-2.0-flash-exp even though the UI/provider path was OpenAI-compatible.
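The pre-fix behavior can be sketched as follows. This is illustrative only: the real function reads process.env directly, while here the environment is passed in as a plain object for clarity.

```typescript
// Illustrative sketch of the pre-fix resolution: a fixed env-var order,
// so the first set variable wins regardless of the active provider.
type Env = Record<string, string | undefined>;

function resolveModelFixedOrder(env: Env): string | undefined {
  return env.ANTHROPIC_MODEL || env.GEMINI_MODEL || env.OPENAI_MODEL;
}

// Reproducing the failure case above: the stale Gemini model wins even
// though CLAUDE_CODE_USE_OPENAI=1 selects the OpenAI-compatible path.
const staleEnv: Env = {
  CLAUDE_CODE_USE_OPENAI: "1",
  OPENAI_MODEL: "llama-3.3-70b-versatile",
  GEMINI_MODEL: "gemini-2.0-flash-exp",
};
console.log(resolveModelFixedOrder(staleEnv)); // "gemini-2.0-flash-exp"
```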

Fix

Make model selection provider-aware:

  • Gemini provider reads GEMINI_MODEL
  • OpenAI/Codex/GitHub providers read OPENAI_MODEL
  • first-party Anthropic reads ANTHROPIC_MODEL

This keeps model resolution aligned with the active provider instead of unrelated leftover env vars.
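The mapping above can be sketched like this. It is a simplified model of the fix, not the PR's actual code: the provider names follow the PR description, and the env object is passed explicitly for illustration.

```typescript
// Simplified sketch of provider-aware model selection.
type Provider = "firstParty" | "gemini" | "openai" | "codex" | "github";
type Env = Record<string, string | undefined>;

function resolveModelForProvider(provider: Provider, env: Env): string | undefined {
  switch (provider) {
    case "gemini":
      return env.GEMINI_MODEL; // Gemini provider reads GEMINI_MODEL
    case "openai":
    case "codex":
    case "github":
      return env.OPENAI_MODEL; // OpenAI-compatible providers read OPENAI_MODEL
    case "firstParty":
      return env.ANTHROPIC_MODEL; // first-party Anthropic reads ANTHROPIC_MODEL
  }
}

// A stale GEMINI_MODEL no longer leaks into an OpenAI-compatible session:
const env: Env = {
  OPENAI_MODEL: "llama-3.3-70b-versatile",
  GEMINI_MODEL: "gemini-2.0-flash-exp", // stale leftover
};
console.log(resolveModelForProvider("openai", env)); // "llama-3.3-70b-versatile"
```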

Testing

  • bun test src/utils/model/modelSelection.test.ts
  • env -u USER_TYPE bun run test:provider
  • env -u USER_TYPE bun run smoke
  • env -u USER_TYPE npm run test:provider-recommendation

Notes

  • provider/model path tested: OpenAI-compatible, Gemini, and first-party env precedence through the model-selection regression tests
  • screenshots attached: n/a
  • follow-up work or known limitations: there is a separate interactive startup issue where the splash screen can appear without the REPL prompt in some directories; that is intentionally not included in this PR to keep scope focused

Relates to #300

Copilot AI review requested due to automatic review settings April 4, 2026 02:16
Contributor

Copilot AI left a comment


Pull request overview

This PR fixes cross-provider model selection by making getUserSpecifiedModelSetting() read the model environment variable that corresponds to the active API provider, preventing stale model env vars from unrelated providers from taking precedence (relates to issue #300).

Changes:

  • Update getUserSpecifiedModelSetting() to choose GEMINI_MODEL / OPENAI_MODEL / ANTHROPIC_MODEL based on getAPIProvider().
  • Add regression tests covering stale GEMINI_MODEL/OPENAI_MODEL leakage across providers.
  • Include model-related tests in the existing test:provider script.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

Changed files:

  • src/utils/model/modelSelection.test.ts: adds regression tests to ensure provider-aware model env precedence.
  • src/utils/model/model.ts: makes env var model resolution provider-aware instead of fixed-order.
  • package.json: expands test:provider to include src/utils/model/*.test.ts.


Comment on lines +26 to +33
function clearProviderAndModelEnv(): void {
  delete process.env.ANTHROPIC_MODEL
  delete process.env.GEMINI_MODEL
  delete process.env.OPENAI_MODEL
  delete process.env.CLAUDE_CODE_USE_GEMINI
  delete process.env.CLAUDE_CODE_USE_GITHUB
  delete process.env.CLAUDE_CODE_USE_OPENAI
}

Copilot AI Apr 4, 2026


clearProviderAndModelEnv() only clears Gemini/GitHub/OpenAI provider flags. If CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, or CLAUDE_CODE_USE_FOUNDRY are set in the runner environment, the "first-party" test may not actually execute the firstParty branch. Clear the remaining provider-selection env vars here (consistent with src/utils/model/providers.test.ts).
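The suggested expansion might look like this. This is a sketch, not the PR's eventual code; the flag names are the ones listed in this comment and elsewhere in the thread.

```typescript
// Sketch of the expanded helper: clear every provider-selection flag and
// model env var so each test starts from a clean environment.
const PROVIDER_AND_MODEL_ENV_KEYS = [
  "ANTHROPIC_MODEL",
  "GEMINI_MODEL",
  "OPENAI_MODEL",
  "CLAUDE_CODE_USE_GEMINI",
  "CLAUDE_CODE_USE_GITHUB",
  "CLAUDE_CODE_USE_OPENAI",
  "CLAUDE_CODE_USE_BEDROCK",
  "CLAUDE_CODE_USE_VERTEX",
  "CLAUDE_CODE_USE_FOUNDRY",
] as const;

function clearProviderAndModelEnv(): void {
  for (const key of PROVIDER_AND_MODEL_ENV_KEYS) {
    delete process.env[key];
  }
}
```

Iterating over a single key list keeps the helper in sync with new provider flags more easily than one delete statement per variable.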

Comment thread src/utils/model/modelSelection.test.ts Outdated
Comment on lines +45 to +61
test('openai provider ignores stale gemini model env when choosing the main loop model', () => {
  clearProviderAndModelEnv()
  process.env.CLAUDE_CODE_USE_OPENAI = '1'
  process.env.OPENAI_MODEL = 'llama-3.3-70b-versatile'
  process.env.GEMINI_MODEL = 'gemini-2.0-flash-exp'

  expect(getUserSpecifiedModelSetting()).toBe('llama-3.3-70b-versatile')
})

test('gemini provider ignores stale openai model env when choosing the main loop model', () => {
  clearProviderAndModelEnv()
  process.env.CLAUDE_CODE_USE_GEMINI = '1'
  process.env.GEMINI_MODEL = 'gemini-2.5-flash'
  process.env.OPENAI_MODEL = 'gpt-4o'

  expect(getUserSpecifiedModelSetting()).toBe('gemini-2.5-flash')
})

Copilot AI Apr 4, 2026


These test names say "choosing the main loop model", but the assertion is on getUserSpecifiedModelSetting() (a pre-allowlist/pre-default selection step). Consider renaming the tests (or asserting on getMainLoopModel() if that's what you intend) to keep the test intent precise.

@JithendraNara
Author

Addressed the Copilot review points in the latest push.

  • Expanded clearProviderAndModelEnv() to also clear CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, and CLAUDE_CODE_USE_FOUNDRY, so the first-party test stays isolated even if those flags are present in the runner environment.
  • Renamed the regression tests to describe the exact function under test: getUserSpecifiedModelSetting().

Re-ran:

  • env -u USER_TYPE bun test src/utils/model/modelSelection.test.ts
  • env -u USER_TYPE bun run test:provider

Comment thread src/utils/model/modelSelection.test.ts Outdated
Comment on lines +10 to +11
CLAUDE_CODE_USE_BEDROCK: process.env.CLAUDE_CODE_USE_BEDROCK,
CLAUDE_CODE_USE_FOUNDRY: process.env.CLAUDE_CODE_USE_FOUNDRY,
Contributor


We are moving away from CLAUDE; can we rename these constants to OPEN_CLAUDE?

Vasanthdev2004 previously approved these changes Apr 4, 2026
Collaborator

@Vasanthdev2004 left a comment


Looks good to merge from my side.

This targets the right root cause for #300: model selection now follows the active provider instead of letting stale env vars from another provider win. The new regression coverage around OPENAI_MODEL vs GEMINI_MODEL is solid, and pulling the model-selection tests into the provider test path makes sense.

I reran focused validation here with:

  • bun test ./src/utils/model/modelSelection.test.ts
  • bun test ./src/services/api/codexShim.test.ts ./src/utils/model/modelSelection.test.ts ./src/utils/model/providers.test.ts
  • cmd /c "set USER_TYPE=& bun run smoke"
  • cmd /c "set USER_TYPE=& npm run test:provider-recommendation"

@kevincodex1
Contributor

Please fix conflicts.

Copilot AI review requested due to automatic review settings April 9, 2026 03:05
@JithendraNara force-pushed the codex/fix-provider-model-env branch from e1e78a0 to edeec0c on April 9, 2026 03:05
@JithendraNara
Author

Rebased onto the latest main and resolved the merge conflicts.

Re-ran on the rebased branch:

  • env -u USER_TYPE bun test src/utils/model/modelSelection.test.ts
  • env -u USER_TYPE bun run test:provider
  • env -u USER_TYPE bun run smoke

GitHub now reports the PR as mergeable again; waiting on review/checks to refresh on the new head commit.

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.



Comment thread src/utils/model/model.ts
Comment on lines 82 to 91
// Read the model env var that matches the active provider to prevent
// cross-provider leaks (e.g. ANTHROPIC_MODEL sent to the OpenAI API).
const provider = getAPIProvider()
specifiedModel =
  (provider === 'gemini' ? process.env.GEMINI_MODEL : undefined) ||
  (provider === 'openai' || provider === 'codex' || provider === 'github'
    ? process.env.OPENAI_MODEL
    : undefined) ||
  (provider === 'firstParty' ? process.env.ANTHROPIC_MODEL : undefined) ||
  process.env.ANTHROPIC_MODEL ||
  settings.model ||

Copilot AI Apr 9, 2026


getUserSpecifiedModelSetting() currently falls back to process.env.ANTHROPIC_MODEL unconditionally. That can still cause cross-provider leakage when the active provider is openai/codex/github/gemini and the provider-specific model env var is unset, contradicting the inline comment about preventing ANTHROPIC_MODEL being sent to the OpenAI API. Consider only reading ANTHROPIC_MODEL when the active provider is Anthropic-backed (e.g. firstParty and any other providers that legitimately use ANTHROPIC_MODEL), and otherwise skip it so the function falls back to settings/defaults.
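The guarded fallback being suggested could be sketched as follows. This illustrates the review suggestion rather than the PR's actual code, and it assumes firstParty is the only Anthropic-backed provider in play.

```typescript
// Sketch: only consult ANTHROPIC_MODEL for Anthropic-backed providers;
// otherwise return undefined so resolution falls through to settings/defaults.
type Provider = "firstParty" | "gemini" | "openai" | "codex" | "github";
type Env = Record<string, string | undefined>;

function resolveEnvModel(provider: Provider, env: Env): string | undefined {
  if (provider === "gemini") return env.GEMINI_MODEL;
  if (provider === "openai" || provider === "codex" || provider === "github") {
    return env.OPENAI_MODEL;
  }
  if (provider === "firstParty") return env.ANTHROPIC_MODEL;
  return undefined;
}

// An OpenAI session with only a stale ANTHROPIC_MODEL set no longer leaks it
// (model name here is just an example value):
console.log(resolveEnvModel("openai", { ANTHROPIC_MODEL: "claude-3-opus" })); // undefined
```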

Comment on lines +63 to +70
test('getUserSpecifiedModelSetting prefers OPENAI_MODEL for openai provider over stale GEMINI_MODEL', () => {
  clearProviderAndModelEnv()
  process.env.CLAUDE_CODE_USE_OPENAI = '1'
  process.env.OPENAI_MODEL = 'llama-3.3-70b-versatile'
  process.env.GEMINI_MODEL = 'gemini-2.0-flash-exp'

  expect(getUserSpecifiedModelSetting()).toBe('llama-3.3-70b-versatile')
})

Copilot AI Apr 9, 2026


The getUserSpecifiedModelSetting() logic now treats codex as an OpenAI-compatible provider (reads OPENAI_MODEL), but the regression tests here only cover openai, gemini, and firstParty. Adding a codex case (e.g., CLAUDE_CODE_USE_OPENAI=1 with a Codex-detected OPENAI_MODEL) would lock in the intended behavior and prevent future regressions for that provider branch.

