feat(provider): add ChatGPT Codex provider flow #397
EngineerProjects wants to merge 1 commit into Gitlawb:main
Add Codex as a first-class provider in the provider manager, including reuse of detected local Codex login, built-in Codex model choices, and updated docs for the new setup path. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The current model-option injection is too broad for Codex. On this head, Codex models are added for both `openai` and `codex` providers, which means ordinary OpenAI-compatible profiles can now surface Codex-only models. Direct repro: with `CLAUDE_CODE_USE_OPENAI=1`, `OPENAI_BASE_URL=https://api.openai.com/v1`, and `OPENAI_MODEL=gpt-5.4`, `getAPIProvider()` flips to `codex`, but `resolveProviderRequest()` still keeps `transport: 'chat_completions'` and `baseUrl: 'https://api.openai.com/v1'`. So selecting a Codex model from a normal OpenAI profile silently pushes a Codex model string through a non-Codex backend. The Codex option list should only appear for actual Codex profiles, not every OpenAI-compatible provider.
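A minimal sketch of the gating this review asks for: Codex-only options keyed to the `codex` provider alone, never injected into generic OpenAI-compatible profiles. All names here (`ModelOption`, `getCodexModelOptions`, `getOpenAIModelOptions`, the model ids) are illustrative stand-ins based on the review, not the branch's actual code:

```typescript
// Illustrative sketch, not the PR's implementation: model options are
// resolved per provider, so Codex-only ids never leak into 'openai'.
type ProviderType = 'openai' | 'codex';

interface ModelOption {
  id: string;
  label: string;
}

// Stand-in option lists; the real lists live in modelOptions.ts.
function getCodexModelOptions(): ModelOption[] {
  return [{ id: 'gpt-5.4', label: 'GPT-5.4 (Codex)' }];
}

function getOpenAIModelOptions(): ModelOption[] {
  return [{ id: 'gpt-4o', label: 'GPT-4o' }];
}

function getModelOptions(provider: ProviderType): ModelOption[] {
  switch (provider) {
    case 'codex':
      // Codex-only models surface solely for actual Codex profiles.
      return getCodexModelOptions();
    case 'openai':
      // Generic OpenAI-compatible profiles keep their own list.
      return getOpenAIModelOptions();
  }
}
```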
Vasanthdev2004 left a comment
Rechecked the latest head 4c6b9475848cdf9e75277142ec4f3bdae760c805 against current origin/main.
I still can't approve this because there is one real provider-routing bug in the current implementation.
Current blocker:
- Codex model options are injected into ordinary OpenAI-compatible providers, which breaks provider semantics. In `src/utils/model/modelOptions.ts`, the branch adds `getCodexModelOptions()` for both `openai` and `codex` providers. That means normal OpenAI-compatible profiles (OpenAI API, Groq, DeepSeek, LM Studio, etc.) can now surface Codex-only models.

Direct repro on this head:

- `CLAUDE_CODE_USE_OPENAI=1`
- `OPENAI_BASE_URL=https://api.openai.com/v1`
- `OPENAI_MODEL=gpt-5.4`
Results:

- `getAPIProvider()` -> `codex`
- `resolveProviderRequest({ model: 'gpt-5.4' })` -> `transport: 'chat_completions'`, `baseUrl: 'https://api.openai.com/v1'`

So selecting a Codex model from a normal OpenAI profile silently flips provider detection to `codex` while still sending the request through a non-Codex backend. The Codex option list should only appear for actual Codex profiles, not every OpenAI-compatible provider.
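One way to avoid the detection/resolution mismatch is to drive routing off the profile's declared provider rather than the model string. The sketch below is hypothetical: `ProviderProfile`, this `resolveProviderRequest` signature, and the `'responses'` transport for Codex are assumptions; only `'chat_completions'` and the base URLs appear in the review above.

```typescript
// Hypothetical profile-driven resolution: a Codex-looking model id alone
// can no longer flip routing away from the profile's own backend.
interface ProviderProfile {
  provider: 'openai' | 'codex';
  baseUrl: string;
}

interface ResolvedRequest {
  transport: 'chat_completions' | 'responses';
  baseUrl: string;
  model: string;
}

function resolveProviderRequest(profile: ProviderProfile, model: string): ResolvedRequest {
  if (profile.provider === 'codex') {
    // Actual Codex profiles route to the Codex backend.
    return {
      transport: 'responses', // assumed transport name for the Codex path
      baseUrl: 'https://chatgpt.com/backend-api/codex',
      model,
    };
  }
  // Ordinary OpenAI-compatible profiles keep their own backend, even when
  // the model string happens to look like a Codex-only model id.
  return { transport: 'chat_completions', baseUrl: profile.baseUrl, model };
}
```

With this shape, the repro above would stay on `chat_completions` against `https://api.openai.com/v1` regardless of the model id.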
What I verified on this head:
- direct repro of the provider-flip behavior above on both OpenAI and another OpenAI-compatible base URL
- Codex profile persistence itself is wired correctly (`provider: 'codex'` survives sanitization and applies `CODEX_API_KEY` instead of `OPENAI_API_KEY`)
- local-login detection in `ProviderManager` is internally consistent with `resolveCodexApiCredentials()` on this machine
- `bun test src/utils/providerProfiles.test.ts src/commands/provider/provider.test.tsx src/utils/providerProfile.test.ts` -> 53 pass
- `bun run build` -> success
- `bun run smoke` -> success

I didn't find a compile/runtime blocker outside that, but I wouldn't merge this until Codex-only model options stop appearing under generic OpenAI-compatible providers.
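The persistence behavior verified above can be sketched in a few lines. The helper name `applyProviderEnv` and its shape are assumptions for illustration; the key names `CODEX_API_KEY` and `OPENAI_API_KEY` come from the review:

```typescript
// Sketch (assumed helper name): Codex profiles apply CODEX_API_KEY and
// never set OPENAI_API_KEY; everything else keeps the OpenAI key.
function applyProviderEnv(
  provider: 'openai' | 'codex',
  apiKey: string
): Record<string, string> {
  return provider === 'codex'
    ? { CODEX_API_KEY: apiKey }
    : { OPENAI_API_KEY: apiKey };
}
```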
Thanks for the work on this. The Codex provider flow is a nice addition, but I do not think this one is ready to land just yet. The main blocker is that Codex-only model options are still being added under the generic OpenAI-compatible provider path. There is also a current CI failure on the branch, and the failing log shows new provider profile tests calling helpers that are not available in that test scope. So at the moment the PR is trying to ship a new provider flow while tripping over its own shoelaces in CI. Please keep Codex model choices limited to actual Codex profiles, and fix the failing test imports/setup so the branch is green before merge.
Summary
- Added `codex` as a first-class provider type in the provider manager and provider profile config.
- Exposed a dedicated `ChatGPT / Codex` preset in the modern provider manager UI.
- Added detection of existing local Codex credentials so users can reuse a detected local Codex login instead of always entering a token manually.
- Added a built-in Codex model selection step so users can choose known Codex model options without typing model IDs by hand.
- Updated provider environment application logic so Codex profiles use `CODEX_API_KEY` instead of `OPENAI_API_KEY`.
- Updated docs to explain the new Codex provider flow and local-login reuse path.
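The local-login reuse step above could look roughly like the following. This is a sketch under stated assumptions: the `~/.codex/auth.json` location, the `token` field, and the `detectLocalCodexLogin` name are all illustrative guesses, not necessarily what the PR reads.

```typescript
// Illustrative sketch of local Codex login detection: check a conventional
// credentials file before prompting the user for a token. Path and JSON
// shape are assumptions for illustration only.
import { existsSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { homedir } from 'node:os';

function detectLocalCodexLogin(dir: string = join(homedir(), '.codex')): string | null {
  const authPath = join(dir, 'auth.json');
  if (!existsSync(authPath)) return null; // no local login to reuse
  try {
    const auth = JSON.parse(readFileSync(authPath, 'utf8'));
    // Only reuse the credential when it has the expected shape.
    return typeof auth.token === 'string' ? auth.token : null;
  } catch {
    // Unreadable or malformed file: fall back to manual token entry.
    return null;
  }
}
```

Returning `null` rather than throwing keeps the provider-manager flow simple: a missing or malformed file just means the manual token prompt is shown.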
This changed because Codex support already existed in the runtime/backend, but it was not exposed cleanly in the modern provider-manager flow.
This makes the Codex path discoverable, easier to configure, and more aligned with how users actually authenticate when they already use the Codex CLI locally.
Impact
User-facing impact:

- Users can select `ChatGPT / Codex` directly in the provider manager.

Developer/maintainer impact:

- `codex` is now an explicit provider profile type in the provider manager flow.

Testing

- `bun run build`
- `bun run smoke`
- `bun run dev`
- `bun test src/utils/providerProfiles.test.ts`
- `bun test src/commands/provider/provider.test.tsx`
- `bun test src/utils/providerProfile.test.ts`

Notes
- provider/model path tested: `https://chatgpt.com/backend-api/codex` (`codex` plan)
- screenshots attached (if UI changed):