fix(codex): omit max_output_tokens for ChatGPT codex backend #281

Open

qiaoborui wants to merge 1 commit into sipeed:main from qiaoborui:codex/fix-codex-max-output-tokens-400

Conversation

qiaoborui (Contributor) commented Feb 16, 2026

Summary

  • stop sending max_output_tokens in Codex provider requests (see the sketch after this list)
  • keep model aliases like gpt-5.3-codex unchanged
  • add a regression assertion that MaxOutputTokens is omitted in codex params (a test sketch follows the Testing section)
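
A minimal sketch of the change, assuming the provider serializes a params struct with encoding/json and relies on omitempty. buildCodexParams and MaxOutputTokens are modeled on the names in this PR's tests; the struct layout is an assumption, not the actual diff:

```go
package providers

// codexParams sketches the request body sent to the backend. Only the
// MaxOutputTokens name is taken from this PR; the rest is illustrative.
type codexParams struct {
	Model           string `json:"model"`
	MaxOutputTokens int    `json:"max_output_tokens,omitempty"` // zero value => key omitted from JSON
}

// buildCodexParams deliberately leaves MaxOutputTokens at its zero value,
// so encoding/json drops the key and the ChatGPT codex backend never sees
// the parameter it rejects with HTTP 400.
func buildCodexParams(model string) codexParams {
	// Model aliases such as "gpt-5.3-codex" pass through unchanged.
	return codexParams{Model: model}
}
```

If the fix takes this shape, omitempty keeps the diff small: any provider path that still sets the field for other backends is unaffected.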

Why

The ChatGPT codex backend currently rejects requests that set max_output_tokens with HTTP 400 (Unsupported parameter: max_output_tokens).

Fixes #279

Testing

  • go test ./pkg/providers -run 'TestResolveCodexModel|TestBuildCodexParams_BasicMessage|TestCodexProvider_ChatRoundTrip'
  • manual: go run ./cmd/picoclaw agent -m "hi"
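
A hedged sketch of the regression assertion mentioned in the Summary, reusing the hypothetical buildCodexParams from the sketch above; the real test in pkg/providers may check this differently:

```go
package providers

import (
	"bytes"
	"encoding/json"
	"testing"
)

// Guards against the 400 regression: the serialized codex request body
// must never contain the max_output_tokens key.
func TestBuildCodexParams_OmitsMaxOutputTokens(t *testing.T) {
	body, err := json.Marshal(buildCodexParams("gpt-5.3-codex"))
	if err != nil {
		t.Fatal(err)
	}
	if bytes.Contains(body, []byte("max_output_tokens")) {
		t.Fatalf("codex params must not include max_output_tokens: %s", body)
	}
}
```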

Leeaandrob (Collaborator)

@Zepan This fixes the Codex backend 400 error by omitting max_output_tokens, which the ChatGPT codex backend rejects. A small, targeted fix.

Recommendation: merge. The diff is +7/-3 and it prevents API errors for Codex users.

Development

Successfully merging this pull request may close these issues.

OPENAI provider still 400 Bad Request
