Harden Groq provider support and custom model handling #296
BaranBey1331 wants to merge 7 commits into Gitlawb:main
Conversation
Align codex model ordering across picker, startup display, and provider resolution so newer GPT-5 variants appear in a consistent descending flow. Add regression coverage for codex provider detection and picker ordering. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
fix codex model ordering and detection
Add Groq as a first-class OpenAI-compatible provider with custom model entry, improve tool-call compatibility for Groq-like APIs, and fix GitHub onboarding so stale provider env state is cleared when switching providers. Also polish the provider/onboarding TUI and add targeted release-focused tests. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Make --provider switching clear stale provider-selection and OpenAI-compatible routing env so GitHub, Groq, OpenAI, and Anthropic transitions do not inherit the previous provider's base URL or API key. Add Claude-authored regression coverage for the switching paths. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
fix: add Groq provider support and harden onboarding
Preserve Groq provider flow while keeping newer Gemini auth and Codex alias handling from main, and reconcile related tests so the branch is merge-clean. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Conflict resolution update pushed. What I fixed in the conflict set:
Conflict cleanliness checks:
Focused test evidence (expanded):
Combined focused run with sanitized env:
Net: the Groq path remains working, the conflict set is resolved cleanly, and the branch is updated for merge review.
gnanam1990
left a comment
Good work overall — the Groq routing, shim hardening, and test coverage are solid. Two logic bugs need fixing before merge.
Bug 1 — `safeGroqModel` reads the wrong env var (provider.tsx:576)

```ts
// Current — inherits OPENAI_MODEL from a previous OpenAI session
const safeGroqModel =
  sanitizeProviderConfigValue(processEnv.OPENAI_MODEL, processEnv) ||
  DEFAULT_GROQ_MODEL
```

If the user previously had OpenAI configured, `OPENAI_MODEL` will be something like `gpt-4o`, and the Groq wizard will pre-fill that as the default Groq model. Fix:

```ts
// Option A — always start from the Groq default (simplest)
const safeGroqModel = DEFAULT_GROQ_MODEL

// Option B — use a dedicated GROQ_MODEL env var if you want session persistence
const safeGroqModel =
  sanitizeProviderConfigValue(processEnv.GROQ_MODEL, processEnv) ||
  DEFAULT_GROQ_MODEL
```

Bug 2 — the groq-model step passes an empty `processEnv: {}` (provider.tsx:714)

```ts
const env = buildGroqProfileEnv({
  apiKey: step.apiKey,
  model: value.trim() || DEFAULT_GROQ_MODEL,
  baseUrl: DEFAULT_GROQ_BASE_URL,
  processEnv: {}, // ← sanitizer can't see the real env
})
```

Every other provider step passes `process.env` here (`buildOpenAIProfileEnv`, `buildGeminiProfileEnv`, etc.). This should be consistent:

```ts
processEnv: process.env,
```

Please fix both and re-request review — happy to approve once these are addressed.
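For reference, the intended behavior of the Bug 1 fix can be sketched in isolation. This is an illustrative stand-in, not the repo's code: `sanitizeProviderConfigValue` here is a simplified stub, and the default model value is assumed from the Groq examples elsewhere in this thread.

```typescript
// Illustrative stubs — the real sanitizeProviderConfigValue lives in the repo;
// this simplified version only checks for a non-empty string.
type Env = Record<string, string | undefined>

const DEFAULT_GROQ_MODEL = 'llama-3.3-70b-versatile'

function sanitizeProviderConfigValue(
  value: string | undefined,
  _processEnv: Env,
): string | undefined {
  return value && value.trim() !== '' ? value.trim() : undefined
}

// Option B from the review: read a Groq-scoped variable, never OPENAI_MODEL,
// so a previous OpenAI session cannot leak its model into the Groq wizard.
function safeGroqModel(processEnv: Env): string {
  return (
    sanitizeProviderConfigValue(processEnv.GROQ_MODEL, processEnv) ||
    DEFAULT_GROQ_MODEL
  )
}

// A stale OpenAI session no longer bleeds into the Groq default:
safeGroqModel({ OPENAI_MODEL: 'gpt-4o' }) // -> 'llama-3.3-70b-versatile'
safeGroqModel({ GROQ_MODEL: 'openai/gpt-oss-20b' }) // -> 'openai/gpt-oss-20b'
```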
Normalize Groq credential handling, prevent cross-provider env leaks, and add current Groq-hosted models while keeping arbitrary custom Groq model IDs first-class across the picker and runtime helpers. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Updated with the latest Groq fixes and tests. Review-requested fixes
Additional bug fixes in this update
Newly supported Groq-hosted model IDs
Testing
The new code is now pushed on top of the PR branch.
@gnanam1990 can you review if possible?
```ts
        if (model) process.env.GEMINI_MODEL = model
        break
      // …
      case 'groq':
```
This still leaves the new `--provider groq` path broken for `GROQ_API_KEY`-only setups. On the current head, `applyProviderFlag('groq', ['--model','llama-3.3-70b-versatile'])` with only `GROQ_API_KEY=gsk-only` still makes `getProviderValidationError()` return `OPENAI_API_KEY is required when CLAUDE_CODE_USE_OPENAI=1 and OPENAI_BASE_URL is not local`. So the first-class Groq flag path still requires users to duplicate the key into `OPENAI_API_KEY`.
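One possible shape of the fix, sketched with simplified stand-ins. The `getProviderValidationError` below is an illustrative reimplementation of the idea, not the code in `src/utils/providerValidation.ts`, and the locality check is an assumption:

```typescript
type Env = Record<string, string | undefined>

// Sketch: in Groq mode, accept GROQ_API_KEY as satisfying the
// OpenAI-compatible key requirement instead of demanding OPENAI_API_KEY.
function getProviderValidationError(env: Env): string | null {
  if (env.CLAUDE_CODE_USE_OPENAI !== '1') return null

  // Assumed locality heuristic for the sketch; the real check may differ.
  const baseUrl = env.OPENAI_BASE_URL ?? ''
  const isLocal = baseUrl.includes('localhost') || baseUrl.includes('127.0.0.1')
  if (isLocal) return null

  const groqMode = env.CLAUDE_CODE_USE_GROQ === '1'
  const hasKey =
    Boolean(env.OPENAI_API_KEY) || (groqMode && Boolean(env.GROQ_API_KEY))
  if (!hasKey) {
    return 'OPENAI_API_KEY is required when CLAUDE_CODE_USE_OPENAI=1 and OPENAI_BASE_URL is not local'
  }
  return null
}

// A GROQ_API_KEY-only setup now validates in Groq mode:
getProviderValidationError({
  CLAUDE_CODE_USE_OPENAI: '1',
  CLAUDE_CODE_USE_GROQ: '1',
  GROQ_API_KEY: 'gsk-only',
  OPENAI_BASE_URL: 'https://api.groq.com/openai/v1',
}) // -> null
```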
```ts
    env.OPENAI_BASE_URL = persistedOpenAIBaseUrl || DEFAULT_GROQ_BASE_URL
    env.OPENAI_MODEL = persistedOpenAIModel || DEFAULT_GROQ_MODEL
    env.OPENAI_API_KEY =
```
Groq-specific flows still give stale `OPENAI_API_KEY` precedence over `GROQ_API_KEY`. On this head, `buildLaunchEnv({ profile: 'groq', ... processEnv: { OPENAI_API_KEY: 'sk-openai', GROQ_API_KEY: 'gsk-groq' } })` emits `OPENAI_API_KEY=sk-openai`, and an actual Groq request then sends `Authorization: Bearer sk-openai`. That means switching from OpenAI to Groq can still leak the wrong credential into Groq mode.
Vasanthdev2004
left a comment
Rechecked the latest head b5912b3ab0b3345451961f24862bb1f50bf34d64.
The earlier review feedback about the Groq wizard default model and the empty processEnv sanitizer path is fixed on this head.
I still can't approve this branch because I can reproduce two blocker-level Groq credential regressions:
1. `GROQ_API_KEY`-only startup still fails in Groq mode.
   `src/utils/providerValidation.ts` still only accepts `OPENAI_API_KEY` for non-local OpenAI-compatible providers. On this head, with:
   - `CLAUDE_CODE_USE_OPENAI=1`
   - `CLAUDE_CODE_USE_GROQ=1`
   - `GROQ_API_KEY=gsk-only`
   - `OPENAI_BASE_URL=https://api.groq.com/openai/v1`
   - `OPENAI_MODEL=llama-3.3-70b-versatile`

   `getProviderValidationError()` returns `OPENAI_API_KEY is required when CLAUDE_CODE_USE_OPENAI=1 and OPENAI_BASE_URL is not local`. The same failure happens after `applyProviderFlag('groq', ...)`, so the new `--provider groq` path still requires users to duplicate the key into `OPENAI_API_KEY`.

2. Groq-specific flows still prefer stale `OPENAI_API_KEY` over `GROQ_API_KEY`.
   In `src/utils/providerProfile.ts`, both `buildGroqProfileEnv()` and the Groq branch of `buildLaunchEnv()` take `processEnv.OPENAI_API_KEY` before `processEnv.GROQ_API_KEY`. On this head, a direct repro with:
   - `OPENAI_API_KEY=sk-openai`
   - `GROQ_API_KEY=gsk-groq`

   makes `buildLaunchEnv({ profile: 'groq', ... })` emit `OPENAI_API_KEY=sk-openai`, and an actual Groq request then sends `Authorization: Bearer sk-openai` instead of the Groq key. That means switching from OpenAI to Groq can still leak the wrong credential into Groq mode.
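The precedence fix for the second blocker can be sketched as follows. `resolveGroqApiKey` and `buildGroqLaunchKey` are hypothetical helpers standing in for the Groq branch of the repo's `buildLaunchEnv()`, shown only to illustrate the intended key ordering:

```typescript
type Env = Record<string, string | undefined>

// In Groq mode, resolve the key Groq-first so a stale OPENAI_API_KEY from a
// previous OpenAI session can never win over the user's actual Groq key.
function resolveGroqApiKey(processEnv: Env): string | undefined {
  return processEnv.GROQ_API_KEY || processEnv.OPENAI_API_KEY
}

// The shim still reads OPENAI_API_KEY at request time, so the launch env
// maps the resolved Groq key onto that slot.
function buildGroqLaunchKey(processEnv: Env): { OPENAI_API_KEY?: string } {
  return { OPENAI_API_KEY: resolveGroqApiKey(processEnv) }
}

// Mixed env left over from an OpenAI -> Groq switch:
buildGroqLaunchKey({ OPENAI_API_KEY: 'sk-openai', GROQ_API_KEY: 'gsk-groq' })
// -> { OPENAI_API_KEY: 'gsk-groq' }  (Authorization header gets the Groq key)
```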
Fresh verification on this head:
- `bun test src/services/api/openaiShim.test.ts src/commands/provider/provider.test.tsx src/utils/providerProfile.test.ts src/utils/model/providers.test.ts src/utils/providerFlag.test.ts src/commands/onboard-github/onboard-github.test.tsx src/utils/model/modelOptions.test.ts` -> 117 pass
- `bun run build` -> success
So the provider/Groq test suite is green, but the current head still has real credential-handling regressions in the new Groq flow. I wouldn't merge it until those are fixed.
gnanam1990
left a comment
There is good work here, but I still see an important state-leak problem.
It looks like `OPENAI_MODEL` can still remain stale when switching providers without an explicit `--model`, especially when moving into the Groq or GitHub flows. That leaves one of the core provider-switch leakage problems still unresolved.
Please fix the stale model carryover path and add a focused provider-switch test that covers switching providers without `--model`.
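A minimal sketch of the requested fix, assuming an `applyProviderFlag`-style switch point. `switchProvider` and `OPENAI_COMPAT_KEYS` are illustrative names, not the repo's actual helpers:

```typescript
type Env = Record<string, string | undefined>

// Env keys that route OpenAI-compatible requests; clearing all of them on
// every switch prevents base-URL, key, and model carryover between providers.
const OPENAI_COMPAT_KEYS = [
  'OPENAI_BASE_URL',
  'OPENAI_API_KEY',
  'OPENAI_MODEL',
] as const

// Wipe routing state first, then apply only the new provider's values;
// OPENAI_MODEL stays unset unless --model supplied an explicit override.
function switchProvider(env: Env, updates: Env): Env {
  const next = { ...env }
  for (const key of OPENAI_COMPAT_KEYS) delete next[key]
  for (const [key, value] of Object.entries(updates)) {
    if (value !== undefined) next[key] = value
  }
  return next
}

// Switching to Groq without --model no longer inherits gpt-4o:
const after = switchProvider(
  { OPENAI_MODEL: 'gpt-4o', OPENAI_API_KEY: 'sk-openai' },
  { OPENAI_BASE_URL: 'https://api.groq.com/openai/v1' },
)
// after.OPENAI_MODEL -> undefined
```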
vibecoded with openclaude, needs review
Summary
Bugs fixed
- The Groq wizard no longer pre-fills a stale `OPENAI_MODEL` value from a previous OpenAI session.
- `GROQ_API_KEY` works everywhere the code previously only handled `OPENAI_API_KEY`.
- `GROQ_API_KEY` cleanup during provider switching, profile application, and GitHub onboarding.
- `groq` is treated consistently alongside other OpenAI-compatible providers when resolving current/custom models.

Improvements added
- New Groq-hosted model IDs:
  - `openai/gpt-oss-120b`
  - `openai/gpt-oss-20b`
  - `meta-llama/llama-4-maverick-17b-128e-instruct`
  - `meta-llama/llama-4-scout-17b-16e-instruct`
- Test coverage for `GROQ_API_KEY` handling, provider cleanup, streaming behavior, and tool-call mapping.

Live Groq validation
- Validated against `https://api.groq.com/openai/v1`.
- Live request with `openai/gpt-oss-20b`.
- The shim tolerates Groq's `reasoning` field without breaking the current shim flow.
- `thinking` blocks are not forwarded into Groq requests.

Testing
- `bun test src/services/api/openaiShim.test.ts src/commands/provider/provider.test.tsx src/utils/providerProfile.test.ts src/utils/model/providers.test.ts src/utils/providerFlag.test.ts src/commands/onboard-github/onboard-github.test.tsx` -> 115 pass, 0 fail
- `bun test src/utils/model/modelOptions.test.ts src/utils/providerProfile.test.ts src/services/api/openaiShim.test.ts src/commands/provider/provider.test.tsx src/utils/model/providers.test.ts src/utils/providerFlag.test.ts src/commands/onboard-github/onboard-github.test.tsx` -> 117 pass, 0 fail
- Live Groq request with `openai/gpt-oss-20b`: success

Notes
- The runtime still sends `OPENAI_API_KEY` in-request, while `GROQ_API_KEY` is now accepted and normalized consistently at the edges.
- `bun run typecheck` / wider repo validation was not used here, as this branch already has unrelated repository-wide churn; the coverage above is focused on the Groq/provider surface changed by this PR.
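The `reasoning`-field tolerance mentioned above can be sketched as follows. The message shape and field names are assumptions about Groq's OpenAI-compatible responses, and `toShimContent` is an illustrative helper, not the repo's shim code:

```typescript
// Sketch: tolerate a nonstandard top-level `reasoning` field on an
// OpenAI-compatible chat message — don't throw on the unknown field,
// and don't forward it downstream.
interface GroqMessage {
  role: string
  content: string | null
  reasoning?: string // Groq-specific extension, accepted but ignored
}

function toShimContent(message: GroqMessage): string {
  // Keep only the standard assistant text; `reasoning` is dropped here.
  return message.content ?? ''
}

toShimContent({ role: 'assistant', content: 'hi', reasoning: 'chain...' })
// -> 'hi'
```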