
fix: replace discontinued gemini preview model#399

Closed
Miyoko076 wants to merge 3 commits into Gitlawb:main from Miyoko076:main

Conversation


@Miyoko076 Miyoko076 commented Apr 5, 2026

Fixes #398

The gemini-2.5-pro-preview-03-25 model has been discontinued. Replaced the opus mapping with the stable gemini-2.5-pro version to ensure continued functionality.

Reference: https://ai.google.dev/gemini-api/docs/deprecations#gemini-2.5-pro-models

Summary

  • what changed: Updated the opus tier mapping in GEMINI_MODEL_DEFAULTS (configs.ts) to use gemini-2.5-pro.
  • why it changed: The previous preview model reached its end-of-life and was discontinued, which would cause API request failures.
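As a rough sketch, the updated mapping might look like the following. The shape of GEMINI_MODEL_DEFAULTS and the sonnet/haiku values are assumptions drawn from this thread's discussion, not from the actual configs.ts:

```typescript
// Hypothetical sketch of the updated GEMINI_MODEL_DEFAULTS in configs.ts.
// Only the opus value comes from this PR; the other tiers are assumed
// from the tests discussed later in the thread.
const GEMINI_MODEL_DEFAULTS = {
  opus: 'gemini-2.5-pro', // previously 'gemini-2.5-pro-preview-03-25'
  sonnet: 'gemini-2.0-flash',
  haiku: 'gemini-2.0-flash-lite',
} as const
```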

Impact

  • user-facing impact: Prevents API errors when users select the Gemini provider with the opus tier. Ensures stable model generation.
  • developer/maintainer impact: Removes reliance on a deprecated preview model, improving the stability of default configurations.

Testing

  • bun run build
  • bun run smoke
  • bun test ./src/utils/model/providerModelDefaults.test.ts

Notes

  • provider/model path tested: Gemini Provider
  • screenshots attached (if UI changed): N/A
  • follow-up work or known limitations: None

kevincodex1 previously approved these changes Apr 6, 2026
Collaborator

@gnanam1990 gnanam1990 left a comment


Thanks for working on this. I don't think the discontinued Gemini default is fully removed yet.

The PR updates the model config, but there is still a hardcoded gemini-2.5-pro-preview-03-25 default path in src/utils/model/model.ts, and that path is still used by runtime model selection. That means users can still hit the discontinued model through the default-opus flow.

Please update the remaining runtime default path as well and add a focused test that covers actual model selection, not just config values.


kali113 commented Apr 6, 2026

Proposed changes to address review feedback

Here is a patch addressing the open review comments:

1. Update runtime default in model.ts:128

-    return process.env.GEMINI_MODEL || 'gemini-2.5-pro-preview-03-25'
+    return process.env.GEMINI_MODEL || 'gemini-2.5-pro'

2. Add focused tests in src/utils/model/providerModelDefaults.test.ts

  • Tests getDefaultOpusModel() returns gemini-2.5-pro (not discontinued preview)
  • Tests getDefaultSonnetModel() returns gemini-2.0-flash
  • Tests getDefaultHaikuModel() returns gemini-2.0-flash-lite
  • Verifies no discontinued preview-03-25 string appears in any default
  • Tests explicit GEMINI_MODEL override is respected


kali113 commented Apr 6, 2026

Here is the full test file ready to copy into src/utils/model/providerModelDefaults.test.ts:

providerModelDefaults.test.ts
import { afterEach, expect, test } from 'bun:test'

import {
  getDefaultHaikuModel,
  getDefaultOpusModel,
  getDefaultSonnetModel,
} from './model.js'

const originalEnv = {
  ANTHROPIC_DEFAULT_OPUS_MODEL: process.env.ANTHROPIC_DEFAULT_OPUS_MODEL,
  ANTHROPIC_DEFAULT_SONNET_MODEL: process.env.ANTHROPIC_DEFAULT_SONNET_MODEL,
  ANTHROPIC_DEFAULT_HAIKU_MODEL: process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL,
  ANTHROPIC_MODEL: process.env.ANTHROPIC_MODEL,
  GEMINI_MODEL: process.env.GEMINI_MODEL,
  OPENAI_MODEL: process.env.OPENAI_MODEL,
  CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
  CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
  CLAUDE_CODE_USE_FIRST_PARTY: process.env.CLAUDE_CODE_USE_FIRST_PARTY,
}

function clearProviderEnv(): void {
  delete process.env.CLAUDE_CODE_USE_GEMINI
  delete process.env.CLAUDE_CODE_USE_OPENAI
  delete process.env.CLAUDE_CODE_USE_FIRST_PARTY
}

function clearModelEnv(): void {
  delete process.env.ANTHROPIC_DEFAULT_OPUS_MODEL
  delete process.env.ANTHROPIC_DEFAULT_SONNET_MODEL
  delete process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL
  delete process.env.ANTHROPIC_MODEL
  delete process.env.GEMINI_MODEL
  delete process.env.OPENAI_MODEL
}

afterEach(() => {
  process.env.ANTHROPIC_DEFAULT_OPUS_MODEL = originalEnv.ANTHROPIC_DEFAULT_OPUS_MODEL
  process.env.ANTHROPIC_DEFAULT_SONNET_MODEL = originalEnv.ANTHROPIC_DEFAULT_SONNET_MODEL
  process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL = originalEnv.ANTHROPIC_DEFAULT_HAIKU_MODEL
  process.env.ANTHROPIC_MODEL = originalEnv.ANTHROPIC_MODEL
  process.env.GEMINI_MODEL = originalEnv.GEMINI_MODEL
  process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
  process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
  process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
  process.env.CLAUDE_CODE_USE_FIRST_PARTY = originalEnv.CLAUDE_CODE_USE_FIRST_PARTY
})

test('Gemini provider opus default uses stable gemini-2.5-pro (not discontinued preview)', () => {
  clearProviderEnv()
  clearModelEnv()
  process.env.CLAUDE_CODE_USE_GEMINI = '1'
  expect(getDefaultOpusModel()).toBe('gemini-2.5-pro')
})

test('Gemini provider does not reference discontinued preview model', () => {
  clearProviderEnv()
  clearModelEnv()
  process.env.CLAUDE_CODE_USE_GEMINI = '1'
  const opus = getDefaultOpusModel()
  const sonnet = getDefaultSonnetModel()
  const haiku = getDefaultHaikuModel()
  expect(opus).not.toContain('preview-03-25')
  expect(sonnet).not.toContain('preview-03-25')
  expect(haiku).not.toContain('preview-03-25')
})

test('Gemini provider sonnet default is gemini-2.0-flash', () => {
  clearProviderEnv()
  clearModelEnv()
  process.env.CLAUDE_CODE_USE_GEMINI = '1'
  expect(getDefaultSonnetModel()).toBe('gemini-2.0-flash')
})

test('Gemini provider haiku default is gemini-2.0-flash-lite', () => {
  clearProviderEnv()
  clearModelEnv()
  process.env.CLAUDE_CODE_USE_GEMINI = '1'
  expect(getDefaultHaikuModel()).toBe('gemini-2.0-flash-lite')
})

test('Gemini provider respects explicit GEMINI_MODEL override for opus', () => {
  clearProviderEnv()
  clearModelEnv()
  process.env.CLAUDE_CODE_USE_GEMINI = '1'
  process.env.GEMINI_MODEL = 'gemini-2.5-flash'
  expect(getDefaultOpusModel()).toBe('gemini-2.5-flash')
})


kali113 commented Apr 6, 2026

@gnanam1990 what do you think?

@kevincodex1 kevincodex1 requested a review from gnanam1990 April 6, 2026 12:47
Collaborator

@gnanam1990 gnanam1990 left a comment


Hi @Miyoko076, thank you for catching this and putting up the fix! The deprecated preview model definitely needs to go. Here are my findings after reviewing the full codebase:


Critical: Missed runtime fallback in model.ts

The config updates in configs.ts look correct, but there is still a hardcoded reference to the discontinued model in the runtime code path that actually serves the model string to Gemini provider users:

src/utils/model/model.ts, line 130:

return process.env.GEMINI_MODEL || 'gemini-2.5-pro-preview-03-25'

This is inside getDefaultOpusModel(), which is called by the opus alias, opusplan, Max/Team Premium defaults, and getBestModel(). Since this function does not read from configs.ts — it uses its own inline string — the config changes alone won't fix the runtime behavior for Gemini provider users without a GEMINI_MODEL env var set.

Fix: Change 'gemini-2.5-pro-preview-03-25' to 'gemini-2.5-pro' on that line.


Important: GEMINI_MODEL_DEFAULTS appears to be unused

The GEMINI_MODEL_DEFAULTS constant in configs.ts (lines 22–26) is exported but never imported anywhere in the codebase. The PR updates it (which is fine for consistency), but it might be worth noting this is effectively dead code. If it's intended as documentation, that's okay — just something to be aware of.


Suggestion: Centralize the Gemini Opus default

The same model string currently lives in two independent locations (configs.ts and model.ts:130), which is exactly how they fell out of sync in the first place. Having getDefaultOpusModel() read from GEMINI_MODEL_DEFAULTS.opus (or a shared constant) would prevent this class of bug from recurring. This is optional for this PR but would be a nice follow-up.
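A minimal sketch of that centralization, assuming the constant shape and function signature discussed in this thread rather than the real codebase:

```typescript
// Sketch: one shared constant, read by the runtime default instead of
// a second inline string, so the two locations cannot drift apart.
const GEMINI_MODEL_DEFAULTS = {
  opus: 'gemini-2.5-pro',
  sonnet: 'gemini-2.0-flash',
  haiku: 'gemini-2.0-flash-lite',
} as const

function getDefaultOpusModel(): string {
  // An explicit env override still wins; otherwise fall back to the
  // centralized default rather than a hardcoded literal.
  return process.env.GEMINI_MODEL || GEMINI_MODEL_DEFAULTS.opus
}
```

With this shape, a future model bump only touches GEMINI_MODEL_DEFAULTS, and the runtime path picks it up automatically.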


Suggestion: Add a focused test

There are currently no tests verifying what getDefaultOpusModel() returns when the Gemini provider is active. A simple test like the one proposed in the comments would have caught this automatically. The test file shared in the comments looks solid — would be great to include it.


Summary

What                           Status
configs.ts updates (5 places)  Looks good
model.ts:130 runtime fallback  Still has discontinued model — needs fix
Tests                          Missing — would be nice to add

The PR is very close! Just the one-line fix in model.ts is the blocker. Thank you again for working on this — really appreciate the effort to keep the defaults up to date.

@Miyoko076
Author

Thanks for the thorough review @gnanam1990! I've updated the PR to address your feedback:

  • Updated model.ts so the runtime fallback uses the stable model.
  • Centralized the provider model defaults so configs.ts and model.ts share one source, resolving the unused constant.

@Miyoko076 Miyoko076 requested a review from gnanam1990 April 6, 2026 20:45
@Miyoko076

This comment was marked as outdated.

@Miyoko076 Miyoko076 marked this pull request as draft April 6, 2026 22:42
@Miyoko076
Author

Added providerModelDefaults.test.ts

@Miyoko076
Author

Testing

  • bun run build
  • bun run smoke
  • bun test ./src/utils/model/providerModelDefaults.test.ts

C:\Users\Miyoko\Desktop\openclaude>bun test ./src/utils/model/providerModelDefaults.test.ts
bun test v1.3.11 (af24e281)

src\utils\model\providerModelDefaults.test.ts:
✓ Gemini provider loads expected default models (Fallback logic check)
✓ Gemini provider does not reference discontinued preview model
✓ Gemini provider correctly applies GEMINI_MODEL environment override
✓ Gemini provider main loop setting defaults to sonnet
✓ Gemini provider main loop setting respects GEMINI_MODEL override
✓ OpenAI provider loads expected default models (Fallback logic check)
✓ OpenAI provider correctly applies OPENAI_MODEL environment override
✓ OpenAI provider main loop setting defaults to opus
✓ OpenAI provider main loop setting respects OPENAI_MODEL override
✓ Codex provider loads expected default models (Fallback logic check)
✓ Codex provider correctly applies OPENAI_MODEL environment override
✓ Codex provider main loop setting defaults to opus
✓ Codex provider main loop setting respects OPENAI_MODEL override

 13 pass
 0 fail
 27 expect() calls
Ran 13 tests across 1 file. [257.00ms]

C:\Users\Miyoko\Desktop\openclaude>bun run build
$ bun run scripts/build.ts
  🔇 no-telemetry: stubbed 21 modules
✓ Built openclaude v0.1.8 → dist/cli.mjs

C:\Users\Miyoko\Desktop\openclaude>bun run smoke
$ bun run build && node dist/cli.mjs --version
$ bun run scripts/build.ts
  🔇 no-telemetry: stubbed 21 modules
✓ Built openclaude v0.1.8 → dist/cli.mjs
0.1.8 (Open Claude)

@Miyoko076 Miyoko076 marked this pull request as ready for review April 6, 2026 22:57
@Miyoko076 Miyoko076 force-pushed the main branch 3 times, most recently from 7bfe465 to 97a1da2 Compare April 7, 2026 02:45
@Miyoko076
Author

Fixed all nits. PTAL

gnanam1990 previously approved these changes Apr 7, 2026
Collaborator

@gnanam1990 gnanam1990 left a comment


Thanks for fixing this. I rechecked the latest head and the original blocker is resolved:

  • the runtime Gemini fallback now uses gemini-2.5-pro
  • the external defaults are centralized consistently
  • focused coverage was added in src/utils/model/providerModelDefaults.test.ts

I did not find any new blocking regression in the current head. This looks good to me.

@Miyoko076
Author

@gnanam1990
Force-pushed to remove unused environment variables. No logic changes were made. Sorry to keep bothering you.

Diff command
C:\Users\Miyoko\Desktop\openclaude>git diff 97a1da2 7026dc9
diff --git a/src/utils/model/providerModelDefaults.test.ts b/src/utils/model/providerModelDefaults.test.ts
index 8fdda62..7453c33 100644
--- a/src/utils/model/providerModelDefaults.test.ts
+++ b/src/utils/model/providerModelDefaults.test.ts
@@ -17,17 +17,11 @@ const originalEnv = {
   OPENAI_MODEL: process.env.OPENAI_MODEL,
   CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
   CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
-  CLAUDE_CODE_USE_CODEX: process.env.CLAUDE_CODE_USE_CODEX,
-  CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
-  CLAUDE_CODE_USE_FIRST_PARTY: process.env.CLAUDE_CODE_USE_FIRST_PARTY,
 }

 function clearProviderEnv(): void {
   delete process.env.CLAUDE_CODE_USE_GEMINI
   delete process.env.CLAUDE_CODE_USE_OPENAI
-  delete process.env.CLAUDE_CODE_USE_CODEX
-  delete process.env.CLAUDE_CODE_USE_GITHUB
-  delete process.env.CLAUDE_CODE_USE_FIRST_PARTY
 }

 function clearModelEnv(): void {
@@ -48,9 +42,6 @@ afterEach(() => {
   process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
   process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
   process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
-  process.env.CLAUDE_CODE_USE_CODEX = originalEnv.CLAUDE_CODE_USE_CODEX
-  process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
-  process.env.CLAUDE_CODE_USE_FIRST_PARTY = originalEnv.CLAUDE_CODE_USE_FIRST_PARTY
 })
Re-test
 C:\Users\Miyoko\Desktop\openclaude>bun test C:\Users\Miyoko\Desktop\openclaude\src\utils\model\providerModelDefaults.test.ts
bun test v1.3.11 (af24e281)

src\utils\model\providerModelDefaults.test.ts:
✓ Gemini provider loads expected default models (Fallback logic check)
✓ Gemini provider does not reference discontinued preview model
✓ Gemini provider correctly applies GEMINI_MODEL environment override [15.00ms]
✓ Gemini provider main loop setting defaults to sonnet
✓ Gemini provider main loop setting respects GEMINI_MODEL override
✓ OpenAI provider loads expected default models (Fallback logic check)
✓ OpenAI provider correctly applies OPENAI_MODEL environment override
✓ OpenAI provider main loop setting defaults to opus
✓ OpenAI provider main loop setting respects OPENAI_MODEL override
✓ Codex provider loads expected default models (Fallback logic check)
✓ Codex provider correctly applies OPENAI_MODEL environment override
✓ Codex provider main loop setting defaults to opus
✓ Codex provider main loop setting respects OPENAI_MODEL override

 13 pass
 0 fail
 27 expect() calls
Ran 13 tests across 1 file. [782.00ms]

@kevincodex1 kevincodex1 requested a review from gnanam1990 April 7, 2026 09:01
@kevincodex1
Contributor

Checking that all tests pass; will merge this once they do.

@kevincodex1
Contributor

Hello @Miyoko076, we would love to include this in the next release. It looks good already; please just fix the failing tests.

@gnanam1990
Collaborator

@Miyoko076

@Miyoko076
Author

@kevincodex1
OK, I will check where the failures occur.

Author

Miyoko076 commented Apr 7, 2026

for i in {1..10}; do env -i PATH="$PATH" bun test --randomize 2>&1 | sed -n '/tests failed:/,$p'; done

Here is a summary of the 10 randomized test runs.
(Note: One run was filtered out by the bash script.)

Run  Seed          Pass          Fail  Errors  Total Tests
1    3422900647    526           11    0       537
2    4204651412    232           95    43      327
3    799752846     528           9     0       537
4    3168679273    532           5     0       537
5    3657557856    403           56    24      459
6    462867619     536           1     0       537
7    3526493823    526           11    0       537
8    3968004562    403           56    24      459
9    477758715     237           90    43      327
10   (Not logged)  (Not logged)  ?     ?       ?
bun test --randomize results on WSL2
miyoko@Legion7-16iax10:/mnt/c/Users/Miyoko/Desktop/openclaude$ for i in {1..10}; do env -i PATH="$PATH" bun test --randomize 2>&1 | sed -n '/tests failed:/,$p'; done
11 tests failed:
(fail) Gemini provider correctly applies GEMINI_MODEL environment override [2.44ms]
(fail) Codex provider main loop setting defaults to opus [0.75ms]
(fail) OpenAI provider main loop setting defaults to opus [0.83ms]
(fail) Codex provider main loop setting respects OPENAI_MODEL override [0.86ms]
(fail) Gemini provider loads expected default models (Fallback logic check) [0.96ms]
(fail) Gemini provider main loop setting respects GEMINI_MODEL override [0.94ms]
(fail) OpenAI provider loads expected default models (Fallback logic check) [1.08ms]
(fail) OpenAI provider main loop setting respects OPENAI_MODEL override [1.11ms]
(fail) Gemini provider main loop setting defaults to sonnet [1.07ms]
(fail) Windows clipboard fallback > uses PowerShell instead of clip.exe for local Windows copy [5.47ms]
(fail) clipboard path behavior remains stable > Windows clipboard fallback is skipped over SSH [3.21ms]

--seed=3422900647
526 pass
11 fail
788 expect() calls
Ran 537 tests across 80 files. [-3.04s]
95 tests failed:
(fail) checkDomainBlocklist > returns allowed without API call in OpenAI mode [2.07ms]
(fail) checkDomainBlocklist > returns allowed without API call in Gemini mode [1.67ms]
(fail) checkDomainBlocklist > calls Anthropic domain check in first-party mode [1.51ms]
(fail) saveGithubModelsToken / clearGithubModelsToken > save returns failure in bare mode [1.22ms]
(fail) saveGithubModelsToken / clearGithubModelsToken > clear succeeds in bare mode [1.00ms]
(fail) readGithubModelsToken > returns undefined in bare mode [1.13ms]
(fail) persistActiveProviderProfileModel > updates active profile model and current env for profile-managed sessions [2.75ms]
(fail) persistActiveProviderProfileModel > does not mutate process env when session is not profile-managed [2.26ms]
(fail) applyProviderProfileToProcessEnv > openai profile clears competing gemini/github flags [2.37ms]
(fail) applyProviderProfileToProcessEnv > anthropic profile clears competing gemini/github flags [2.41ms]
(fail) applyActiveProviderProfileFromConfig > applies active profile when no explicit provider is selected [2.34ms]
(fail) applyActiveProviderProfileFromConfig > does not re-apply active profile when flags conflict with current provider [2.31ms]
(fail) applyActiveProviderProfileFromConfig > re-applies active profile when profile-managed env drifts [2.58ms]
(fail) applyActiveProviderProfileFromConfig > does not override explicit startup selection when profile marker is stale [2.48ms]
(fail) applyActiveProviderProfileFromConfig > does not override explicit startup provider selection [2.39ms]
(fail) getProviderPresetDefaults > ollama preset defaults to a local Ollama model [2.45ms]
(fail) deleteProviderProfile > deleting final profile clears provider env when active profile applied it [2.39ms]
(fail) deleteProviderProfile > deleting final profile preserves explicit startup provider env [2.20ms]
(fail) loadConversationForResume rejects oversized transcripts before resume hooks run [27.05ms]
(fail) getRateLimitResetDelayMs - providers without reset headers > returns null for bedrock [1.88ms]
(fail) getRateLimitResetDelayMs - providers without reset headers > returns null for vertex [1.84ms]
(fail) parseOpenAIDuration > parses minutes only: "2m" → 120000 [1.74ms]
(fail) parseOpenAIDuration > parses minutes+seconds: "6m0s" → 360000 [1.70ms]
(fail) parseOpenAIDuration > parses milliseconds: "500ms" → 500 [1.67ms]
(fail) parseOpenAIDuration > parses hours+minutes+seconds: "1h30m0s" → 5400000 [1.98ms]
(fail) parseOpenAIDuration > parses seconds: "1s" → 1000 [2.35ms]
(fail) parseOpenAIDuration > returns null for empty string [1.65ms]
(fail) parseOpenAIDuration > returns null for unrecognized format [1.78ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > reads anthropic-ratelimit-unified-reset Unix timestamp [1.80ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > returns null when header absent [1.86ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > returns null when reset is in the past [1.77ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > reads x-ratelimit-reset-tokens and picks the larger delay [1.63ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > returns null when no openai rate limit headers present [2.03ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > reads x-ratelimit-reset-requests duration string [1.96ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > works for github provider too [1.99ms]
(fail) clipboard path behavior remains stable > local macOS clipboard fallback still uses pbcopy [2.22ms]
(fail) clipboard path behavior remains stable > getClipboardPath stays native on local macOS [2.42ms]
(fail) clipboard path behavior remains stable > getClipboardPath stays tmux-buffer when TMUX is set [5.31ms]
(fail) clipboard path behavior remains stable > Windows clipboard fallback is skipped over SSH [1.66ms]
(fail) Windows clipboard fallback > uses PowerShell instead of clip.exe for local Windows copy [1.45ms]
(fail) Windows clipboard fallback > passes Windows clipboard text through a UTF-8 temp file instead of stdin [1.37ms]
(fail) prefetchOfficialMcpUrls > fetches registry in first-party mode [1.08ms]
(fail) prefetchOfficialMcpUrls > does not fetch registry when using OpenAI mode [0.83ms]
(fail) prefetchOfficialMcpUrls > does not fetch registry when using Gemini mode [1.10ms]
(fail) hydrateGithubModelsTokenFromSecureStorage > does not override existing GITHUB_TOKEN [0.96ms]
(fail) hydrateGithubModelsTokenFromSecureStorage > sets GITHUB_TOKEN from secure storage when USE_GITHUB and env token empty [0.94ms]
(fail) opens the model picker without awaiting local model discovery refresh [4.06ms]
(fail) saveGeminiAccessToken stores and reads back the token [1.75ms]
(fail) clearGeminiAccessToken removes the stored token [0.96ms]
(fail) fastMode ant-only fallback cleanup > prefetchFastModeStatus network failure does not force-enable from USER_TYPE=ant [2.60ms]
(fail) fastMode ant-only fallback cleanup > resolveFastModeStatusFromCache does not force-enable from USER_TYPE=ant [1.57ms]
(fail) fastMode ant-only fallback cleanup > prefetchFastModeStatus without auth does not force-enable from USER_TYPE=ant [1.71ms]

--seed=4204651412
232 pass
95 fail
43 errors
259 expect() calls
Ran 327 tests across 80 files. [1.82s]
9 tests failed:
(fail) Gemini provider loads expected default models (Fallback logic check) [0.80ms]
(fail) OpenAI provider correctly applies OPENAI_MODEL environment override [0.64ms]
(fail) Gemini provider correctly applies GEMINI_MODEL environment override [0.73ms]
(fail) OpenAI provider main loop setting defaults to opus [0.74ms]
(fail) OpenAI provider loads expected default models (Fallback logic check) [0.78ms]
(fail) Gemini provider main loop setting defaults to sonnet [0.73ms]
(fail) Gemini provider main loop setting respects GEMINI_MODEL override [0.78ms]
(fail) OpenAI provider main loop setting respects OPENAI_MODEL override [0.74ms]
(fail) Windows clipboard fallback > uses PowerShell instead of clip.exe for local Windows copy [4.94ms]

--seed=799752846
528 pass
9 fail
786 expect() calls
Ran 537 tests across 80 files. [-3.34s]
5 tests failed:
(fail) Windows clipboard fallback > uses PowerShell instead of clip.exe for local Windows copy [4.32ms]
(fail) OpenAI provider correctly applies OPENAI_MODEL environment override [1.44ms]
(fail) OpenAI provider main loop setting defaults to opus [0.81ms]
(fail) OpenAI provider main loop setting respects OPENAI_MODEL override [0.79ms]
(fail) OpenAI provider loads expected default models (Fallback logic check) [0.75ms]

--seed=3168679273
532 pass
5 fail
790 expect() calls
Ran 537 tests across 80 files. [2.40s]
56 tests failed:
(fail) checkDomainBlocklist > returns allowed without API call in Gemini mode [940.57ms]
(fail) checkDomainBlocklist > calls Anthropic domain check in first-party mode [3.35ms]
(fail) checkDomainBlocklist > returns allowed without API call in OpenAI mode [1.64ms]
(fail) PromptInputQueuedCommands > shows a next-turn guidance banner for queued prompt messages [1.90ms]
(fail) loadConversationForResume rejects oversized transcripts before resume hooks run [27.87ms]
(fail) Windows clipboard fallback > uses PowerShell instead of clip.exe for local Windows copy [4.20ms]
(fail) clipboard path behavior remains stable > Windows clipboard fallback is skipped over SSH [2.76ms]
(fail) OpenAI provider main loop setting defaults to opus [0.83ms]
(fail) Gemini provider loads expected default models (Fallback logic check) [0.68ms]
(fail) OpenAI provider main loop setting respects OPENAI_MODEL override [0.60ms]
(fail) Gemini provider main loop setting defaults to sonnet [0.59ms]
(fail) Gemini provider correctly applies GEMINI_MODEL environment override [0.59ms]
(fail) Gemini provider main loop setting respects GEMINI_MODEL override [0.61ms]
(fail) Codex provider main loop setting defaults to opus [0.60ms]
(fail) Codex provider main loop setting respects OPENAI_MODEL override [0.59ms]
(fail) getRateLimitResetDelayMs - providers without reset headers > returns null for vertex [1.95ms]
(fail) getRateLimitResetDelayMs - providers without reset headers > returns null for bedrock [1.87ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > reads x-ratelimit-reset-requests duration string [2.27ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > works for github provider too [1.75ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > returns null when no openai rate limit headers present [2.32ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > reads x-ratelimit-reset-tokens and picks the larger delay [1.83ms]
(fail) parseOpenAIDuration > returns null for empty string [2.23ms]
(fail) parseOpenAIDuration > parses minutes+seconds: "6m0s" → 360000 [2.17ms]
(fail) parseOpenAIDuration > parses hours+minutes+seconds: "1h30m0s" → 5400000 [2.10ms]
(fail) parseOpenAIDuration > parses milliseconds: "500ms" → 500 [2.17ms]
(fail) parseOpenAIDuration > returns null for unrecognized format [2.04ms]
(fail) parseOpenAIDuration > parses seconds: "1s" → 1000 [2.43ms]
(fail) parseOpenAIDuration > parses minutes only: "2m" → 120000 [2.02ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > reads anthropic-ratelimit-unified-reset Unix timestamp [2.27ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > returns null when header absent [2.33ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > returns null when reset is in the past [2.14ms]
(fail) opens the model picker without awaiting local model discovery refresh [4.71ms]

--seed=3657557856
403 pass
56 fail
24 errors
544 expect() calls
Ran 459 tests across 80 files. [2.46s]
1 tests failed:
(fail) Windows clipboard fallback > uses PowerShell instead of clip.exe for local Windows copy [3.29ms]

--seed=462867619
536 pass
1 fail
794 expect() calls
Ran 537 tests across 80 files. [2.42s]
11 tests failed:
(fail) Windows clipboard fallback > uses PowerShell instead of clip.exe for local Windows copy [4.04ms]
(fail) Gemini provider main loop setting defaults to sonnet [1.92ms]
(fail) OpenAI provider main loop setting defaults to opus [0.77ms]
(fail) Gemini provider correctly applies GEMINI_MODEL environment override [0.84ms]
(fail) OpenAI provider correctly applies OPENAI_MODEL environment override [0.73ms]
(fail) OpenAI provider loads expected default models (Fallback logic check) [0.76ms]
(fail) OpenAI provider main loop setting respects OPENAI_MODEL override [0.81ms]
(fail) Codex provider main loop setting defaults to opus [0.74ms]
(fail) Codex provider main loop setting respects OPENAI_MODEL override [0.75ms]
(fail) Gemini provider main loop setting respects GEMINI_MODEL override [0.69ms]
(fail) Gemini provider loads expected default models (Fallback logic check) [0.67ms]

--seed=3526493823
526 pass
11 fail
786 expect() calls
Ran 537 tests across 80 files. [3.01s]
56 tests failed:
(fail) loadConversationForResume rejects oversized transcripts before resume hooks run [958.11ms]
(fail) PromptInputQueuedCommands > shows a next-turn guidance banner for queued prompt messages [2.58ms]
(fail) opens the model picker without awaiting local model discovery refresh [4.07ms]
(fail) checkDomainBlocklist > returns allowed without API call in Gemini mode [1.58ms]
(fail) checkDomainBlocklist > returns allowed without API call in OpenAI mode [1.34ms]
(fail) checkDomainBlocklist > calls Anthropic domain check in first-party mode [1.45ms]
(fail) Windows clipboard fallback > uses PowerShell instead of clip.exe for local Windows copy [3.61ms]
(fail) getRateLimitResetDelayMs - providers without reset headers > returns null for vertex [2.16ms]
(fail) getRateLimitResetDelayMs - providers without reset headers > returns null for bedrock [1.59ms]
(fail) parseOpenAIDuration > returns null for unrecognized format [2.04ms]
(fail) parseOpenAIDuration > parses milliseconds: "500ms" → 500 [1.67ms]
(fail) parseOpenAIDuration > parses hours+minutes+seconds: "1h30m0s" → 5400000 [1.90ms]
(fail) parseOpenAIDuration > returns null for empty string [1.61ms]
(fail) parseOpenAIDuration > parses seconds: "1s" → 1000 [2.08ms]
(fail) parseOpenAIDuration > parses minutes only: "2m" → 120000 [1.65ms]
(fail) parseOpenAIDuration > parses minutes+seconds: "6m0s" → 360000 [1.64ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > returns null when reset is in the past [1.70ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > returns null when header absent [1.97ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > reads anthropic-ratelimit-unified-reset Unix timestamp [1.66ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > reads x-ratelimit-reset-tokens and picks the larger delay [1.60ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > reads x-ratelimit-reset-requests duration string [1.73ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > returns null when no openai rate limit headers present [1.70ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > works for github provider too [1.86ms]
(fail) OpenAI provider main loop setting respects OPENAI_MODEL override [0.93ms]
(fail) Gemini provider main loop setting respects GEMINI_MODEL override [0.79ms]
(fail) Codex provider main loop setting defaults to opus [0.78ms]
(fail) OpenAI provider main loop setting defaults to opus [0.76ms]
(fail) Codex provider main loop setting respects OPENAI_MODEL override [1.22ms]
(fail) OpenAI provider loads expected default models (Fallback logic check) [0.85ms]
(fail) Gemini provider correctly applies GEMINI_MODEL environment override [0.77ms]
(fail) Gemini provider main loop setting defaults to sonnet [0.88ms]
(fail) Gemini provider loads expected default models (Fallback logic check) [0.91ms]

--seed=3968004562
403 pass
56 fail
24 errors
542 expect() calls
Ran 459 tests across 80 files. [1.75s]
90 tests failed:
(fail) saveGeminiAccessToken stores and reads back the token [1.52ms]
(fail) clearGeminiAccessToken removes the stored token [1.31ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > works for github provider too [1.94ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > returns null when no openai rate limit headers present [2.12ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > reads x-ratelimit-reset-tokens and picks the larger delay [1.99ms]
(fail) getRateLimitResetDelayMs - OpenAI provider > reads x-ratelimit-reset-requests duration string [2.05ms]
(fail) getRateLimitResetDelayMs - providers without reset headers > returns null for vertex [1.90ms]
(fail) getRateLimitResetDelayMs - providers without reset headers > returns null for bedrock [2.07ms]
(fail) parseOpenAIDuration > parses milliseconds: "500ms" → 500 [2.66ms]
(fail) parseOpenAIDuration > parses seconds: "1s" → 1000 [2.16ms]
(fail) parseOpenAIDuration > parses hours+minutes+seconds: "1h30m0s" → 5400000 [2.01ms]
(fail) parseOpenAIDuration > parses minutes+seconds: "6m0s" → 360000 [2.13ms]
(fail) parseOpenAIDuration > returns null for unrecognized format [2.21ms]
(fail) parseOpenAIDuration > parses minutes only: "2m" → 120000 [2.54ms]
(fail) parseOpenAIDuration > returns null for empty string [2.02ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > returns null when header absent [2.54ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > reads anthropic-ratelimit-unified-reset Unix timestamp [1.95ms]
(fail) getRateLimitResetDelayMs - Anthropic (firstParty) > returns null when reset is in the past [1.96ms]
(fail) fastMode ant-only fallback cleanup > resolveFastModeStatusFromCache does not force-enable from USER_TYPE=ant [3.29ms]
(fail) fastMode ant-only fallback cleanup > prefetchFastModeStatus without auth does not force-enable from USER_TYPE=ant [1.68ms]
(fail) fastMode ant-only fallback cleanup > prefetchFastModeStatus network failure does not force-enable from USER_TYPE=ant [2.47ms]
(fail) prefetchOfficialMcpUrls > fetches registry in first-party mode [1.68ms]
(fail) prefetchOfficialMcpUrls > does not fetch registry when using Gemini mode [1.26ms]
(fail) prefetchOfficialMcpUrls > does not fetch registry when using OpenAI mode [1.17ms]
(fail) loadConversationForResume rejects oversized transcripts before resume hooks run [29.40ms]
(fail) checkDomainBlocklist > calls Anthropic domain check in first-party mode [1.94ms]
(fail) checkDomainBlocklist > returns allowed without API call in OpenAI mode [1.86ms]
(fail) checkDomainBlocklist > returns allowed without API call in Gemini mode [1.74ms]
(fail) clipboard path behavior remains stable > Windows clipboard fallback is skipped over SSH [2.07ms]
(fail) clipboard path behavior remains stable > getClipboardPath stays tmux-buffer when TMUX is set [1.83ms]
(fail) clipboard path behavior remains stable > local macOS clipboard fallback still uses pbcopy [1.58ms]
(fail) clipboard path behavior remains stable > getClipboardPath stays native on local macOS [1.78ms]
(fail) Windows clipboard fallback > uses PowerShell instead of clip.exe for local Windows copy [1.86ms]
(fail) Windows clipboard fallback > passes Windows clipboard text through a UTF-8 temp file instead of stdin [1.76ms]
(fail) opens the model picker without awaiting local model discovery refresh [4.93ms]
(fail) applyActiveProviderProfileFromConfig > re-applies active profile when profile-managed env drifts [3.25ms]
(fail) applyActiveProviderProfileFromConfig > does not re-apply active profile when flags conflict with current provider [2.85ms]
(fail) applyActiveProviderProfileFromConfig > does not override explicit startup selection when profile marker is stale [4.00ms]
(fail) applyActiveProviderProfileFromConfig > applies active profile when no explicit provider is selected [2.91ms]
(fail) applyActiveProviderProfileFromConfig > does not override explicit startup provider selection [2.88ms]
(fail) deleteProviderProfile > deleting final profile preserves explicit startup provider env [2.92ms]
(fail) deleteProviderProfile > deleting final profile clears provider env when active profile applied it [3.33ms]
(fail) getProviderPresetDefaults > ollama preset defaults to a local Ollama model [3.49ms]
(fail) persistActiveProviderProfileModel > does not mutate process env when session is not profile-managed [2.94ms]
(fail) persistActiveProviderProfileModel > updates active profile model and current env for profile-managed sessions [2.93ms]
(fail) applyProviderProfileToProcessEnv > openai profile clears competing gemini/github flags [3.40ms]
(fail) applyProviderProfileToProcessEnv > anthropic profile clears competing gemini/github flags [3.59ms]

--seed=477758715
237 pass
90 fail
43 errors
267 expect() calls
Ran 327 tests across 80 files. [1.63s]

This issue stems from Bun's test runner sharing global state and the environment across all tests in a single process.
Ref: https://bun.com/docs/test/runtime-behavior#single-process

Single Process
The test runner runs all tests in a single process by default. This provides faster startup and shared memory, but it means:

  • Tests share global state (use lifecycle hooks to clean up)
  • One test crash can affect others
  • No true parallelization of individual tests

Addressing this issue thoroughly will require further investigation.
For this PR, I will push an additional commit that handles setup and cleanup more strictly in providerModelDefaults.test.ts.
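As a sketch of what "stricter setup and cleanup" could look like under Bun's single-process model, the helper below restores the environment in `finally`, so even a throwing test body cannot leak env state into the next test. The key list and helper name here are illustrative, not the project's actual API:

```typescript
// Illustrative sketch of strict env isolation for a single-process test
// runner. KEYS and withIsolatedEnv are hypothetical names, not project API.
const KEYS = ['GEMINI_MODEL', 'OPENAI_MODEL'] as const

function withIsolatedEnv(
  overrides: Partial<Record<(typeof KEYS)[number], string>>,
  fn: () => void,
): void {
  const backup: Record<string, string | undefined> = {}
  for (const key of KEYS) {
    backup[key] = process.env[key]
    delete process.env[key]
  }
  for (const [key, value] of Object.entries(overrides)) {
    if (value !== undefined) process.env[key] = value
  }
  try {
    fn()
  } finally {
    // Runs even when fn throws, so a failed assertion cannot leak env state.
    for (const key of KEYS) {
      const original = backup[key]
      if (original === undefined) delete process.env[key]
      else process.env[key] = original
    }
  }
}
```

The same backup/restore logic could equally live in `beforeEach`/`afterEach` hooks, which Bun's docs recommend for cleaning up shared global state.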

@Miyoko076 Miyoko076 marked this pull request as draft April 7, 2026 11:01
@kali113

kali113 commented Apr 7, 2026

ok so i looked at this pr because gemini-2.5-pro-preview-03-25 got discontinued (thanks google, very cool) and honestly the fix is pretty solid.

commit f6d1528 swaps the dead model for gemini-2.5-pro in both configs.ts and model.ts, including that hardcoded fallback at line 130 the reviewer caught, which was honestly kinda nasty ngl. then there's commit 6187733, which is actually a really clean refactor: they made this EXTERNAL_PROVIDER_DEFAULTS thing so everything pulls from one place now instead of hardcoded strings scattered everywhere, which means future updates will be way easier.

and they added tests in 7026dc9, like 183 lines covering gemini, openai, and codex, including a test that specifically checks the old preview model is GONE, which is pretty smart.

BUT, and this is a big but: the tests have a spy bug in providerModelDefaults.test.ts at lines 142, 154, 167, and 177. they use spyOn(providers, 'getAPIProvider'), but mockRestore only runs if the test passes. so if an assertion fails, the spy never gets restored, the next test gets the wrong provider, and everything goes chaotic, which is probably why there's random test failures happening.

the fix would be wrapping those in try/finally so mockRestore always runs or just adding spy cleanup to afterEach.
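to make that concrete, here's a rough sketch of the try/finally version. the `providers` object below is a stand-in for the real providers module, and the manual stub mimics what `spyOn(...).mockReturnValue(...)` / `mockRestore()` would do:

```typescript
// Sketch of the try/finally fix. `providers` is a stand-in module shape;
// the manual stub mimics spyOn(...).mockReturnValue(...) / mockRestore().
const providers = {
  getAPIProvider(): string {
    return 'anthropic'
  },
}

function withStubbedProvider(value: string, fn: () => void): void {
  const original = providers.getAPIProvider
  providers.getAPIProvider = () => value
  try {
    fn()
  } finally {
    // Always restore, even when an assertion inside fn throws,
    // so the next test never sees the stubbed provider.
    providers.getAPIProvider = original
  }
}
```

same idea with afterEach: push each spy into an array and call mockRestore on all of them in the hook, which runs regardless of pass/fail.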

also i checked the whole codebase for the old model name, and the only place that mentions gemini-2.5-pro-preview-03-25 now is that negative assertion in the test file, which is literally checking it doesn't exist, so that's fine. all production code uses the new stable model.

verdict: the pr fixes the issue and the refactor is genuinely good architecture, but that test bug needs fixing before merge or the random failures will keep happening. also, someone please add a newline at the end of the test file, my ocd is screaming. ok that's it, time to crash, gn

The gemini-2.5-pro-preview-03-25 model has been discontinued. Replaced the opus mapping with the stable gemini-2.5-pro version to ensure continued functionality.

Reference: https://ai.google.dev/gemini-api/docs/deprecations?hl=ko#gemini-2.5-pro-models
Updated model.ts to use EXTERNAL_PROVIDER_DEFAULTS instead of hardcoded values.

Added CODEX_MODEL_DEFAULTS and unified provider mappings in configs.ts.
Following the centralization of provider defaults, added comprehensive unit tests to ensure fallback logic and env overrides (e.g., OPENAI_MODEL, GEMINI_MODEL) work correctly for all supported 3P providers (Gemini, OpenAI, Codex, GitHub). Uses `bun:test` spies to mock the Codex provider state to avoid auth dependency issues during testing.
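A minimal sketch of what this centralization might look like, built only from the names and default models mentioned in this PR and its tests; the actual shape in configs.ts is assumed, not confirmed:

```typescript
// Hypothetical sketch of centralized provider defaults, based on names
// mentioned in this PR; the real configs.ts shape may differ.
type ModelTier = 'opus' | 'sonnet' | 'haiku'

const GEMINI_MODEL_DEFAULTS: Record<ModelTier, string> = {
  opus: 'gemini-2.5-pro', // replaces discontinued gemini-2.5-pro-preview-03-25
  sonnet: 'gemini-2.0-flash',
  haiku: 'gemini-2.0-flash-lite',
}

const OPENAI_MODEL_DEFAULTS: Record<ModelTier, string> = {
  opus: 'gpt-4o',
  sonnet: 'gpt-4o',
  haiku: 'gpt-4o-mini',
}

const EXTERNAL_PROVIDER_DEFAULTS: Record<string, Record<ModelTier, string>> = {
  gemini: GEMINI_MODEL_DEFAULTS,
  openai: OPENAI_MODEL_DEFAULTS,
}

// One lookup replaces hardcoded strings scattered across model.ts.
function defaultModelFor(provider: string, tier: ModelTier): string | undefined {
  return EXTERNAL_PROVIDER_DEFAULTS[provider]?.[tier]
}
```

With this shape, both the config layer and any runtime fallback path resolve through the same table, so retiring a model is a one-line change.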

@Miyoko076 Miyoko076 reopened this Apr 8, 2026
@Miyoko076 Miyoko076 marked this pull request as ready for review April 8, 2026 11:54
@Miyoko076 Miyoko076 marked this pull request as draft April 8, 2026 12:55
@Miyoko076
Author

Miyoko076 commented Apr 8, 2026

providerModelDefaults.test.ts is very unstable. I spent several hours trying to fix it, but I have no idea how to resolve that issue. It fails intermittently with bun test --randomize, which is weird.

import { expect, spyOn, test, type Mock } from 'bun:test'

import {
  getDefaultHaikuModel,
  getDefaultOpusModel,
  getDefaultSonnetModel,
  getDefaultMainLoopModelSetting,
} from './model.js'
import * as providers from './providers.js'

const ISOLATED_ENV_KEYS = [
  'ANTHROPIC_DEFAULT_OPUS_MODEL',
  'ANTHROPIC_DEFAULT_SONNET_MODEL',
  'ANTHROPIC_DEFAULT_HAIKU_MODEL',
  'ANTHROPIC_MODEL',
  'GEMINI_MODEL',
  'OPENAI_MODEL',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_OPENAI',
] as const

type IsolatedEnvKey = typeof ISOLATED_ENV_KEYS[number]

// Note: Synchronous execution only. test.concurrent and async/await are prohibited to prevent race conditions.
function runWithSandbox(
  envOverrides: Partial<Record<IsolatedEnvKey, string>>,
  testFn: (cleanupSpies: Mock<() => string>[]) => unknown
) {
  const backupEnv: Record<string, string | undefined> = {}
  const spiesToRestore: Mock<() => string>[] = []

  for (const key of ISOLATED_ENV_KEYS) {
    backupEnv[key] = process.env[key]
    delete process.env[key]
  }

  for (const key of ISOLATED_ENV_KEYS) {
    const overrideValue = envOverrides[key]
    if (overrideValue !== undefined) {
      process.env[key] = overrideValue
    }
  }

  try {
    const result = testFn(spiesToRestore)
    
    if (result instanceof Promise) {
      throw new Error('runWithSandbox: testFn must be synchronous.')
    }
  } finally {
    for (const key of ISOLATED_ENV_KEYS) {
      const originalValue = backupEnv[key]
      if (originalValue === undefined) {
        delete process.env[key]
      } else {
        process.env[key] = originalValue
      }
    }

    for (const spy of spiesToRestore) {
      spy.mockRestore()
    }
  }
}

// --- Gemini Provider Tests ---

test('Gemini provider loads expected default models (Fallback logic check)', () => {
  runWithSandbox({}, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('gemini')
    spies.push(providerSpy)

    expect(getDefaultOpusModel()).toBe('gemini-2.5-pro')
    expect(getDefaultSonnetModel()).toBe('gemini-2.0-flash')
    expect(getDefaultHaikuModel()).toBe('gemini-2.0-flash-lite')
  })
})

test('Gemini provider does not reference discontinued preview model', () => {
  runWithSandbox({}, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('gemini')
    spies.push(providerSpy)

    expect(getDefaultOpusModel()).not.toContain('gemini-2.5-pro-preview-03-25')
    expect(getDefaultSonnetModel()).not.toContain('gemini-2.5-pro-preview-03-25')
    expect(getDefaultHaikuModel()).not.toContain('gemini-2.5-pro-preview-03-25')
  })
})

test('Gemini provider correctly applies GEMINI_MODEL environment override', () => {
  runWithSandbox({ GEMINI_MODEL: 'gemini-override' }, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('gemini')
    spies.push(providerSpy)

    expect(getDefaultOpusModel()).toBe('gemini-override')
    expect(getDefaultSonnetModel()).toBe('gemini-override')
    expect(getDefaultHaikuModel()).toBe('gemini-override')
  })
})

test('Gemini provider main loop setting defaults to sonnet', () => {
  runWithSandbox({}, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('gemini')
    spies.push(providerSpy)

    expect(getDefaultMainLoopModelSetting()).toBe('gemini-2.0-flash')
  })
})

test('Gemini provider main loop setting respects GEMINI_MODEL override', () => {
  runWithSandbox({ GEMINI_MODEL: 'gemini-override' }, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('gemini')
    spies.push(providerSpy)

    expect(getDefaultMainLoopModelSetting()).toBe('gemini-override')
  })
})

// --- OpenAI Provider Tests ---

test('OpenAI provider loads expected default models (Fallback logic check)', () => {
  runWithSandbox({}, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('openai')
    spies.push(providerSpy)

    expect(getDefaultOpusModel()).toBe('gpt-4o')
    expect(getDefaultSonnetModel()).toBe('gpt-4o')
    expect(getDefaultHaikuModel()).toBe('gpt-4o-mini')
  })
})

test('OpenAI provider correctly applies OPENAI_MODEL environment override', () => {
  runWithSandbox({ OPENAI_MODEL: 'openai-override' }, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('openai')
    spies.push(providerSpy)

    expect(getDefaultOpusModel()).toBe('openai-override')
    expect(getDefaultSonnetModel()).toBe('openai-override')
    expect(getDefaultHaikuModel()).toBe('openai-override')
  })
})

test('OpenAI provider main loop setting defaults to opus', () => {
  runWithSandbox({}, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('openai')
    spies.push(providerSpy)

    expect(getDefaultMainLoopModelSetting()).toBe('gpt-4o')
  })
})

test('OpenAI provider main loop setting respects OPENAI_MODEL override', () => {
  runWithSandbox({ OPENAI_MODEL: 'openai-override' }, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('openai')
    spies.push(providerSpy)

    expect(getDefaultMainLoopModelSetting()).toBe('openai-override')
  })
})

// --- Codex Provider Tests ---

test('Codex provider loads expected default models (Fallback logic check)', () => {
  runWithSandbox({}, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('codex')
    spies.push(providerSpy)
    
    expect(getDefaultOpusModel()).toBe('gpt-5.4')
    expect(getDefaultSonnetModel()).toBe('gpt-5.4')
    expect(getDefaultHaikuModel()).toBe('gpt-5.4-mini')
  })
})

test('Codex provider correctly applies OPENAI_MODEL environment override', () => {
  runWithSandbox({ OPENAI_MODEL: 'codex-override' }, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('codex')
    spies.push(providerSpy)
    
    expect(getDefaultOpusModel()).toBe('codex-override')
    expect(getDefaultSonnetModel()).toBe('codex-override')
    expect(getDefaultHaikuModel()).toBe('codex-override')
  })
})

test('Codex provider main loop setting defaults to opus', () => {
  runWithSandbox({}, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('codex')
    spies.push(providerSpy)
    
    expect(getDefaultMainLoopModelSetting()).toBe('gpt-5.4')
  })
})

test('Codex provider main loop setting respects OPENAI_MODEL override', () => {
  runWithSandbox({ OPENAI_MODEL: 'codex-override' }, (spies) => {
    const providerSpy = spyOn(providers, 'getAPIProvider').mockReturnValue('codex')
    spies.push(providerSpy)
    
    expect(getDefaultMainLoopModelSetting()).toBe('codex-override')
  })
})

@Miyoko076
Author

Closing this PR as it is now a duplicate of #511. Apologies for taking so long to update this; please proceed with the patch in #511 instead.

@Miyoko076 Miyoko076 closed this Apr 8, 2026
@kali113

kali113 commented Apr 8, 2026

@Miyoko076 you were first, his pr was a duplicate



Successfully merging this pull request may close these issues.

Default Gemini models are discontinued and need to be updated

4 participants