This file provides coding guidance for AI agents (including Claude Code, Codex, and others) when working with code in this repository.
This is an opencode plugin that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access gpt-5.2-codex, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini, gpt-5.2, and gpt-5.1 models through their ChatGPT subscription instead of using OpenAI Platform API credits. Legacy GPT-5.0 models are automatically normalized to their GPT-5.1 equivalents.
Key architecture principle: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management.
```bash
# Build (compiles TypeScript + copies HTML file)
npm run build

# Type checking only (no build)
npm run typecheck

# Run all tests
npm test

# Watch mode for TDD
npm run test:watch

# Interactive test UI
npm run test:ui

# Coverage report
npm run test:coverage
```

Important: The build script has a critical step that copies `lib/oauth-success.html` to `dist/lib/`. This HTML file is required for the OAuth callback flow.
The main entry point orchestrates a 7-step fetch flow:
1. Token Management: Check token expiration, refresh if needed
2. URL Rewriting: Transform OpenAI Platform API URLs → ChatGPT backend API (`https://chatgpt.com/backend-api/codex/responses`)
3. Request Transformation:
   - Normalize model names (all variants → `gpt-5.2`, `gpt-5.2-codex`, `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`)
   - Inject Codex system instructions from latest GitHub release
   - Apply reasoning configuration (effort, summary, verbosity)
   - Add CODEX_MODE bridge prompt (default) or tool remap message (legacy)
   - Filter OpenCode system prompts when in CODEX_MODE
   - Filter conversation history (remove `rs_*` IDs for stateless operation)
4. Headers: Add OAuth token + ChatGPT account ID
5. Request Execution: Send to Codex backend
6. Response Logging: Optional debug logging (`ENABLE_PLUGIN_REQUEST_LOGGING=1`)
7. Response Handling: Convert SSE to JSON (non-tool requests) or pass through
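A condensed sketch of how these steps fit together in the plugin's custom fetch. Names other than `transformRequestBody()` are illustrative stand-ins for the real helpers in `index.ts` and `lib/request/fetch-helpers.ts`, and the account-ID header name is an assumption:

```typescript
type Auth = { accessToken: string; accountId: string; expiresAt: number };

async function codexFetch(auth: Auth, requestBody: Record<string, unknown>): Promise<Response> {
  // 1. Token management: refresh if the access token is near expiry (refresh itself omitted)
  if (Date.now() >= auth.expiresAt) throw new Error("token expired; refresh before sending");

  // 2. URL rewriting: the Platform API URL is replaced with the ChatGPT backend API endpoint
  const url = "https://chatgpt.com/backend-api/codex/responses";

  // 3. Request transformation: model normalization, Codex instructions, reasoning config,
  //    history filtering — handled by transformRequestBody() in the real plugin
  const body = { ...requestBody, store: false, include: ["reasoning.encrypted_content"] };

  // 4 + 5. Headers and execution: OAuth bearer token + ChatGPT account ID
  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${auth.accessToken}`,
      "chatgpt-account-id": auth.accountId, // header name assumed
    },
    body: JSON.stringify(body),
  });

  // 6 + 7. Optional debug logging and SSE-to-JSON conversion happen here in the real flow
  return response;
}
```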
Core Plugin (index.ts)
- Plugin definition and main fetch orchestration
- OAuth loader (extracts ChatGPT account ID from JWT)
- Configuration loading and CODEX_MODE determination
Authentication (lib/auth/)
- `auth.ts`: OAuth flow (PKCE, token exchange, JWT decoding, refresh)
- `server.ts`: Local HTTP server for OAuth callback (port 1455)
- `browser.ts`: Platform-specific browser opening
Request Handling (lib/request/)
- `fetch-helpers.ts`: 10 focused helper functions for the main fetch flow
- `request-transformer.ts`: Body transformations (model normalization, reasoning config, input filtering)
- `response-handler.ts`: SSE to JSON conversion (see the sketch below)
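A rough sketch of that SSE-to-JSON conversion, assuming the stream follows the OpenAI Responses streaming format where a final `response.completed` event carries the full response object (error handling omitted):

```typescript
async function sseToJson(sseResponse: Response): Promise<Response> {
  const text = await sseResponse.text();
  let completed: unknown = null;

  // Walk the SSE data lines and keep the final completed response
  for (const line of text.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break;
    const event = JSON.parse(payload);
    if (event.type === "response.completed") completed = event.response;
  }

  return new Response(JSON.stringify(completed), {
    status: sseResponse.status,
    headers: { "Content-Type": "application/json" },
  });
}
```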
Prompts (lib/prompts/)
- `codex.ts`: Fetches Codex instructions from GitHub (ETag-cached), tool remap message
- `codex-opencode-bridge.ts`: CODEX_MODE bridge prompt for CLI parity
Configuration (lib/)
- `config.ts`: Plugin config loading, CODEX_MODE determination
- `constants.ts`: All magic values, URLs, error messages
- `types.ts`: TypeScript type definitions
- `logger.ts`: Debug logging (controlled by env var)
1. Stateless Operation: Uses `store: false` + `include: ["reasoning.encrypted_content"]`
   - Allows multi-turn conversations without server-side storage
   - Encrypted reasoning content persists context across turns
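
   A simplified sketch of the stateless request shape sent to the Codex backend (field names follow the OpenAI Responses API; the surrounding shape is reduced for illustration):

   ```typescript
   const statelessBody = {
     model: "gpt-5.1-codex",
     store: false, // nothing is persisted server-side
     include: ["reasoning.encrypted_content"], // encrypted reasoning is returned so it can be replayed next turn
     input: [
       // prior turns are resent every request; rs_* response IDs are stripped by the input filter
       { role: "user", content: [{ type: "input_text", text: "Refactor the auth module" }] },
     ],
   };
   ```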
2. CODEX_MODE (enabled by default):
   - Priority: `CODEX_MODE` env var > `~/.opencode/openai-codex-auth-config.json` > default (true)
   - When enabled: Filters out OpenCode system prompts, adds the Codex-OpenCode bridge prompt with Task tool & MCP awareness
   - When disabled: Uses the legacy tool remap message
   - Bridge prompt (~550 tokens): Tool mappings, available tools, working style, Task tool/sub-agent awareness, MCP tool awareness
   - Prompt verification: Caches OpenCode's `codex.txt` from GitHub (ETag-based) to verify exact prompt removal, with fallback to text signature matching
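
   A rough sketch of the precedence logic, assuming a `codexMode` key in the config file (the key name and helper are illustrative, not the exact `lib/config.ts` code):

   ```typescript
   import { readFileSync } from "node:fs";
   import { homedir } from "node:os";
   import { join } from "node:path";

   // Resolve CODEX_MODE: env var > config file > default (true)
   function resolveCodexMode(): boolean {
     if (process.env.CODEX_MODE === "1") return true;
     if (process.env.CODEX_MODE === "0") return false;
     try {
       const configPath = join(homedir(), ".opencode", "openai-codex-auth-config.json");
       const config = JSON.parse(readFileSync(configPath, "utf8"));
       if (typeof config.codexMode === "boolean") return config.codexMode; // key name assumed
     } catch {
       // missing or malformed config falls through to the default
     }
     return true; // enabled by default
   }
   ```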
3. Configuration Merging:
   - Global options (`provider.openai.options`) + per-model options (`provider.openai.models[name].options`)
   - Model-specific options override global
   - Plugin defaults: `reasoningEffort: "medium"`, `reasoningSummary: "auto"`, `textVerbosity: "medium"`
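
   A sketch of the merge order (plugin defaults < global options < per-model options); the option names come from this document, while the helper itself is illustrative:

   ```typescript
   type ConfigOptions = {
     reasoningEffort?: string;
     reasoningSummary?: string;
     textVerbosity?: string;
   };

   const PLUGIN_DEFAULTS: ConfigOptions = {
     reasoningEffort: "medium",
     reasoningSummary: "auto",
     textVerbosity: "medium",
   };

   // Later spreads win, so per-model options override global options, which override defaults
   function mergeOptions(global: ConfigOptions = {}, perModel: ConfigOptions = {}): ConfigOptions {
     return { ...PLUGIN_DEFAULTS, ...global, ...perModel };
   }
   ```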
4. Model Normalization (GPT-5.0 → GPT-5.1 migration):
   - All `gpt-5.2-codex*` variants → `gpt-5.2-codex` (newest Codex model, supports `xhigh`)
   - All `gpt-5.1-codex-max*` variants → `gpt-5.1-codex-max`
   - All `gpt-5.1-codex*` variants → `gpt-5.1-codex`
   - All `gpt-5.1-codex-mini*` variants → `gpt-5.1-codex-mini`
   - All `gpt-5.2` variants → `gpt-5.2`
   - All `gpt-5.1` variants → `gpt-5.1`
   - Legacy mappings (GPT-5.0 being phased out):
     - `gpt-5-codex*` variants → `gpt-5.1-codex`
     - `gpt-5-codex-mini*` or `codex-mini-latest` → `gpt-5.1-codex-mini`
     - `gpt-5*` variants (including `gpt-5-mini`, `gpt-5-nano`) → `gpt-5.1`
   - `minimal` effort is auto-normalized to `low` for Codex families (including GPT-5.2 Codex) and clamped to `medium` (or `high` when requested) for Codex Mini
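
   The mappings above, expressed as a prefix-match sketch (ordering matters: most specific prefixes first). This mirrors the list, not the exact `lib/request/request-transformer.ts` code, and the pass-through for unknown models is an assumption:

   ```typescript
   function normalizeModel(model: string): string {
     const m = model.toLowerCase();
     if (m.startsWith("gpt-5.2-codex")) return "gpt-5.2-codex";
     if (m.startsWith("gpt-5.1-codex-max")) return "gpt-5.1-codex-max";
     if (m.startsWith("gpt-5.1-codex-mini")) return "gpt-5.1-codex-mini";
     if (m.startsWith("gpt-5.1-codex")) return "gpt-5.1-codex";
     if (m.startsWith("gpt-5.2")) return "gpt-5.2";
     if (m.startsWith("gpt-5.1")) return "gpt-5.1";
     // Legacy GPT-5.0 names migrate to their GPT-5.1 equivalents
     if (m.startsWith("gpt-5-codex-mini") || m === "codex-mini-latest") return "gpt-5.1-codex-mini";
     if (m.startsWith("gpt-5-codex")) return "gpt-5.1-codex";
     if (m.startsWith("gpt-5")) return "gpt-5.1"; // includes gpt-5-mini, gpt-5-nano
     return model; // unknown models pass through unchanged (assumption)
   }
   ```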
5. Model-Specific Prompt Selection:
   - Different prompts for different model families (matching Codex CLI):
     - `gpt-5.2-codex*` → `gpt-5.2-codex_prompt.md` (117 lines, Codex CLI agent prompt)
     - `gpt-5.1-codex-max*` → `gpt-5.1-codex-max_prompt.md` (117 lines, frontend design guidelines)
     - `gpt-5.1-codex*`, `codex-*` → `gpt_5_codex_prompt.md` (105 lines, coding focus)
     - `gpt-5.2*` → `gpt_5_2_prompt.md` (GPT-5.2 general family)
     - `gpt-5.1*` → `gpt_5_1_prompt.md` (368 lines, full behavioral guidance)
   - `getModelFamily()` determines prompt selection based on the normalized model
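
   As a sketch, prompt selection can be read as a prefix match on the normalized model name; the real logic lives in `getModelFamily()`, and this listing only restates the mapping above:

   ```typescript
   function promptFileFor(normalizedModel: string): string {
     if (normalizedModel.startsWith("gpt-5.2-codex")) return "gpt-5.2-codex_prompt.md";
     if (normalizedModel.startsWith("gpt-5.1-codex-max")) return "gpt-5.1-codex-max_prompt.md";
     if (normalizedModel.startsWith("gpt-5.1-codex") || normalizedModel.startsWith("codex-")) {
       return "gpt_5_codex_prompt.md";
     }
     if (normalizedModel.startsWith("gpt-5.2")) return "gpt_5_2_prompt.md";
     return "gpt_5_1_prompt.md"; // gpt-5.1 general family
   }
   ```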
6. Codex Instructions Caching:
   - Fetches from the latest release tag (not the main branch)
   - ETag-based HTTP conditional requests per model family
   - Separate cache files per family: `gpt-5.2-codex-instructions.md`, `codex-max-instructions.md`, `codex-instructions.md`, `gpt-5.2-instructions.md`, `gpt-5.1-instructions.md`
   - Cache invalidation when the release tag changes
   - Falls back to the bundled version if GitHub is unavailable
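
   A sketch of the ETag-based conditional fetch and cache fallback (the URL, cache paths, and metadata shape are illustrative, not the exact `lib/prompts/codex.ts` implementation):

   ```typescript
   import { readFile, writeFile } from "node:fs/promises";

   async function fetchInstructions(url: string, cachePath: string, metaPath: string): Promise<string> {
     let meta: { etag?: string } = {};
     try {
       meta = JSON.parse(await readFile(metaPath, "utf8"));
     } catch {
       // no cached metadata yet
     }

     const res = await fetch(url, { headers: meta.etag ? { "If-None-Match": meta.etag } : {} });

     // 304 Not Modified or GitHub unavailable: serve the cached (or bundled) copy
     if (res.status === 304 || !res.ok) return readFile(cachePath, "utf8");

     const text = await res.text();
     await writeFile(cachePath, text, "utf8");
     const etag = res.headers.get("etag");
     if (etag) await writeFile(metaPath, JSON.stringify({ etag }), "utf8");
     return text;
   }
   ```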
To add a new configuration option:

1. Add it to the `ConfigOptions` interface in `lib/types.ts`
2. Update `transformRequestBody()` in `lib/request/request-transformer.ts`
3. Add tests in `test/request-transformer.test.ts`
4. Document it in the README.md configuration section
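A hypothetical end-to-end example of that checklist; `customInstructionSuffix` does not exist in the plugin and is only here to illustrate the wiring:

```typescript
// 1. lib/types.ts — extend the ConfigOptions interface
interface ConfigOptions {
  reasoningEffort?: string;
  reasoningSummary?: string;
  textVerbosity?: string;
  customInstructionSuffix?: string; // hypothetical new option
}

// 2. lib/request/request-transformer.ts — apply it inside transformRequestBody()
function applyCustomSuffix(instructions: string, options: ConfigOptions): string {
  return options.customInstructionSuffix
    ? `${instructions}\n\n${options.customInstructionSuffix}`
    : instructions;
}

// 3. test/request-transformer.test.ts — cover both the set and unset cases
// 4. README.md — document the option in the configuration section
```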
All request transformations go through `transformRequestBody()`:

- Input filtering: `filterInput()`, `filterOpenCodeSystemPrompts()`
- Message injection: `addCodexBridgeMessage()` or `addToolRemapMessage()`
- Reasoning config: `getReasoningConfig()` (follows Codex CLI defaults, not opencode defaults)
- Model config: `getModelConfig()` (merges global + per-model options)
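A small sketch of how the reasoning options could map onto the request body, following the Codex CLI-style defaults listed earlier (field names per the Responses API; the plugin's actual helper shape may differ):

```typescript
type Options = { reasoningEffort?: string; reasoningSummary?: string; textVerbosity?: string };

// Assemble reasoning/text config, falling back to the plugin defaults
function getReasoningConfig(options: Options) {
  return {
    reasoning: {
      effort: options.reasoningEffort ?? "medium",
      summary: options.reasoningSummary ?? "auto",
    },
    text: { verbosity: options.textVerbosity ?? "medium" },
  };
}
```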
OAuth implementation follows OpenAI Codex CLI patterns:
- Client ID: `app_EMoamEEZ73f0CkXaXp7hrann`
- PKCE with S256 challenge
- Special params: `codex_cli_simplified_flow=true`, `originator=codex_cli_rs`
- Callback server on port 1455 (matches Codex CLI)
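A sketch of assembling the PKCE authorization URL from those parameters; the authorization endpoint is an assumption here, while the client ID, redirect URI, and special params come from this document:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Build the S256 PKCE challenge and the authorize URL
function buildAuthorizeUrl(): { url: string; verifier: string } {
  const verifier = randomBytes(32).toString("base64url");
  const challenge = createHash("sha256").update(verifier).digest("base64url");

  const params = new URLSearchParams({
    client_id: "app_EMoamEEZ73f0CkXaXp7hrann",
    response_type: "code",
    redirect_uri: "http://localhost:1455/auth/callback",
    code_challenge: challenge,
    code_challenge_method: "S256",
    codex_cli_simplified_flow: "true",
    originator: "codex_cli_rs",
  });

  // Endpoint assumed; see lib/auth/auth.ts for the real URL and scopes
  return { url: `https://auth.openai.com/oauth/authorize?${params}`, verifier };
}
```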
- 191 comprehensive tests covering all modules
- Test files mirror source structure (`test/auth.test.ts` ↔ `lib/auth/auth.ts`)
- Mock-heavy testing (no actual network calls or file I/O in tests)
- Focus on edge cases: token expiration, model normalization, input filtering, CODEX_MODE toggling
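An illustrative vitest case in the style of `test/request-transformer.test.ts`, reusing the `normalizeModel` sketch from the model-normalization section above (the real export may differ):

```typescript
import { describe, expect, it } from "vitest";

describe("model normalization", () => {
  it("maps legacy GPT-5.0 names to their GPT-5.1 equivalents", () => {
    expect(normalizeModel("gpt-5-codex")).toBe("gpt-5.1-codex");
    expect(normalizeModel("codex-mini-latest")).toBe("gpt-5.1-codex-mini");
    expect(normalizeModel("gpt-5-mini")).toBe("gpt-5.1");
  });
});
```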
This plugin intentionally differs from opencode defaults because it accesses ChatGPT backend API (not OpenAI Platform API):
| Setting | opencode Default | This Plugin Default | Reason |
|---|---|---|---|
| `reasoningEffort` | `"high"` (gpt-5) | `"medium"` (Codex Max defaults to `"high"`) | Matches Codex CLI default and Codex Max capabilities |
| `textVerbosity` | `"low"` (gpt-5) | `"medium"` | Matches Codex CLI default |
| `reasoningSummary` | `"detailed"` | `"auto"` | Matches Codex CLI default |
| `gpt-5-codex` config | (excluded) | Full support | opencode excludes gpt-5-codex from auto-config |
| `store` | `true` | `false` | Required for ChatGPT backend |
| `include` | (not set) | `["reasoning.encrypted_content"]` | Required for stateless operation |
- Plugin config: `~/.opencode/openai-codex-auth-config.json`
- Cache dir: `~/.opencode/cache/`
  - `codex-instructions.md` (Codex CLI instructions from GitHub)
  - `codex-instructions-meta.json` (ETag + release tag for Codex instructions)
  - `opencode-codex.txt` (OpenCode system prompt from GitHub, for verification)
  - `opencode-codex-meta.json` (ETag for OpenCode prompt)
- Debug logs: `~/.opencode/logs/codex-plugin/` (when `ENABLE_PLUGIN_REQUEST_LOGGING=1`)
- OAuth callback: `http://localhost:1455/auth/callback`
- `CODEX_MODE`: Override config file (1=enable, 0=disable)
- `ENABLE_PLUGIN_REQUEST_LOGGING`: Enable detailed request logging (1=enable)
- Target: ES2022
- Module: ES2022 with bundler resolution
- Output: `./dist/`
- Strict mode enabled
- Declaration files generated
- Source maps enabled
- Excludes: `test/`, `node_modules/`, `dist/`
Production:
- `@openauthjs/openauth` (OAuth PKCE implementation)
Development:
- `@opencode-ai/plugin` (peer dependency)
- `vitest` (testing framework)
- TypeScript
Zero external runtime dependencies - only uses Node.js built-ins for file I/O, HTTP, crypto.