All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).
Maintenance release: OAuth success page version sync.
- OAuth success banner: Updates the success page header to display the current release version.
Installer safety release: JSONC support, safe uninstall, and minimal reasoning clamp.
- JSONC-aware installer: preserves comments/formatting and prioritizes `opencode.jsonc` over `opencode.json`.
- Safe uninstall: `--uninstall` removes only plugin entries + our model presets; `--all` removes tokens/logs/cache.
- Installer tests: coverage for JSONC parsing, precedence, uninstall safety, and artifact cleanup.
- Default config path: installer creates `~/.config/opencode/opencode.jsonc` when no config exists.
- Dependency: `jsonc-parser` added to keep JSONC updates robust and comment-safe.
- Minimal reasoning clamp: `minimal` is now normalized to `low` for GPT-5.x requests to avoid backend rejection.
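The clamp above can be sketched as a small normalization step; the function and type names here are illustrative, not the plugin's actual exports:

```typescript
// Hypothetical sketch of the "minimal" -> "low" clamp described above.
type Effort = "minimal" | "low" | "medium" | "high";

function clampEffort(effort: Effort, model: string): Effort {
  // GPT-5.x backends reject "minimal", so normalize it to "low".
  if (model.startsWith("gpt-5") && effort === "minimal") {
    return "low";
  }
  return effort;
}
```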
Feature + reliability release: variants support, one-command installer, and auth/error handling fixes.
- One-command installer/update: `npx -y opencode-openai-codex-auth@latest` (global config, backup, cache clear) with `--legacy` for OpenCode v1.0.209 and below.
- Modern variants config: `config/opencode-modern.json` for OpenCode v1.0.210+; legacy presets remain in `config/opencode-legacy.json`.
- Installer CLI bundled as package bin for cross-platform use (Windows/macOS/Linux).
- Variants-aware request config: respects host-supplied `body.reasoning`/`providerOptions.openai` before falling back to defaults.
- OpenCode prompt source: updated to the current upstream repository (`anomalyco/opencode`).
- Docs/README: install-first layout with leaner guidance and an explicit legacy path.
- Headless login fallback: missing `xdg-open` no longer fails the OAuth flow; manual URL paste stays available.
- Error handling alignment: refresh failures throw; usage-limit 404s map to retryable 429s where appropriate.
- AGENTS.md preservation: protected instruction markers stop accidental filtering of user instructions.
- Tool-call integrity: orphan outputs now match `local_shell_call` and `custom_tool_call` (Codex CLI parity); unmatched outputs are preserved as assistant messages.
- Logging noise: debug logging gated behind flags to prevent stdout bleed.
Feature release: GPT 5.2 Codex support and prompt alignment with latest Codex CLI.
- GPT 5.2 Codex model family: full support for `gpt-5.2-codex` with presets:
  - `gpt-5.2-codex-low` - Fast GPT 5.2 Codex responses
  - `gpt-5.2-codex-medium` - Balanced GPT 5.2 Codex tasks
  - `gpt-5.2-codex-high` - Complex GPT 5.2 Codex reasoning & tools
  - `gpt-5.2-codex-xhigh` - Deep GPT 5.2 Codex long-horizon work
- New model family prompt: `gpt-5.2-codex_prompt.md` fetched from the latest Codex CLI release with its own cache file.
- Test coverage: added unit tests for GPT 5.2 Codex normalization, family selection, and reasoning behavior.
- Prompt selection alignment: GPT 5.2 general now uses `gpt_5_2_prompt.md` (Codex CLI parity).
- Reasoning configuration: GPT 5.2 Codex supports `xhigh` but does not support `"none"`; `"none"` auto-upgrades to `"low"` and `"minimal"` normalizes to `"low"`.
- Config presets: `config/opencode-legacy.json` includes the 22 pre-configured presets (adds GPT 5.2 Codex); `config/opencode-modern.json` provides the variant-based setup.
- Docs: updated README/AGENTS/config docs to include GPT 5.2 Codex and new model family behavior.
Minor release: "none" reasoning effort support, orphaned function_call_output fix, and HTML version update.
- "none" reasoning effort support: GPT-5.1 and GPT-5.2 support `reasoning_effort: "none"`, which disables the reasoning phase entirely. This can result in faster responses when reasoning is not needed.
  - `gpt-5.2-none` - GPT-5.2 with reasoning disabled
  - `gpt-5.1-none` - GPT-5.1 with reasoning disabled
- 4 new unit tests for "none" reasoning behavior (now 197 total unit tests).
- Orphaned `function_call_output` 400 errors: fixed API errors when conversation history contains `item_reference` pointing to stored function calls. Previously, orphaned `function_call_output` items were only filtered when `!body.tools`. Orphans are now handled regardless of tools presence and converted to assistant messages, preserving context while avoiding API errors.
- OAuth HTML version display: updated version in `oauth-success.html` from 1.0.4 to 4.1.0.
- `getReasoningConfig()` now detects GPT-5.1 general purpose models (not Codex variants) and allows "none" to pass through.
- GPT-5.2 inherits "none" support as it is newer than GPT-5.1.
- Codex variants (`gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`) do NOT support "none":
- Codex and Codex Max: "none" auto-converts to "low"
- Codex Mini: "none" auto-converts to "medium" (as before)
- Documentation updated with complete reasoning effort support matrix per model family.
- OpenAI API docs (platform.openai.com/docs/api-reference/chat/create): "gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high."
- Codex CLI (`codex-rs/protocol/src/openai_models.rs`): the `ReasoningEffort` enum includes a `None` variant with `#[serde(rename_all = "lowercase")]` serialization to `"none"`.
- Codex CLI (`codex-rs/core/src/client.rs`): the request builder passes `ReasoningEffort::None` through to the API without validation/rejection.
- Codex CLI (`docs/config.md`): documents `model_reasoning_effort = "none"` as a valid config option.
- This plugin defaults to "medium" for better coding assistance; users must explicitly set "none" if desired.
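The per-family "none" handling described above can be sketched as a lookup; the function and type names are assumptions for illustration, not the plugin's real API:

```typescript
// Illustrative sketch: how "none" resolves per model family, per the notes above.
type Family = "gpt-5.1" | "gpt-5.2" | "codex" | "codex-max" | "codex-mini";

function resolveNone(family: Family): string {
  switch (family) {
    case "gpt-5.1":
    case "gpt-5.2":
      return "none"; // general-purpose models pass "none" through
    case "codex-mini":
      return "medium"; // Codex Mini auto-converts "none" to "medium"
    default:
      return "low"; // Codex and Codex Max auto-convert "none" to "low"
  }
}
```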
Feature release: GPT 5.2 model support and image input capabilities.
- GPT 5.2 model family support: full support for OpenAI's latest GPT 5.2 model with 4 reasoning level presets:
  - `gpt-5.2-low` - Fast responses with light reasoning
  - `gpt-5.2-medium` - Balanced reasoning for general tasks
  - `gpt-5.2-high` - Complex reasoning and analysis
  - `gpt-5.2-xhigh` - Deep multi-hour analysis (same as Codex Max)
- Full image input support: all 16 model variants now include `modalities.input: ["text", "image"]`, enabling full multimodal capabilities - read screenshots, diagrams, UI mockups, and any image directly in OpenCode.
- GPT 5.2 model family added to `codex.ts` with dedicated prompt handling.
- Test coverage: updated integration tests to verify all 16 models (was 13); now 193 unit tests + 16 integration tests.
- Model ordering: Config now ordered by model family priority: GPT 5.2 → Codex Max → Codex → Codex Mini → GPT 5.1.
- Removed default presets: removed `gpt-5.1-codex-max` and `gpt-5.2` (without reasoning suffix) to enforce explicit reasoning level selection.
- Test script: `scripts/test-all-models.sh` now uses the local dist for testing and includes GPT 5.2 tests.
- Documentation: updated README with GPT 5.2 models, image support, and a condensed config example.
- GPT 5.2 maps to the `gpt-5.2` API model with the same reasoning options as Codex Max (low/medium/high/xhigh).
- `getModelFamily()` now returns `"gpt-5.2"` for GPT 5.2 models, using Codex Max prompts.
- `getReasoningConfig()` treats GPT 5.2 like Codex Max for `xhigh` reasoning support.
- Model normalization pattern matching updated to recognize GPT 5.2 before other patterns.
Bugfix release: Fixes compaction context loss, agent creation, and SSE/JSON response handling.
- Compaction losing context: v4.0.1 was too aggressive in filtering tool calls - it removed ALL `function_call`/`function_call_output` items when tools weren't present. Now only orphaned outputs (without matching calls) are filtered, preserving matched pairs for compaction context.
- Agent creation failing: the `/agent create` command was failing with "Invalid JSON response" because we were returning SSE streams instead of JSON for `generateText()` requests.
- SSE/JSON response handling: properly detect original request intent - `streamText()` requests get SSE passthrough, `generateText()` requests get SSE→JSON conversion.
- `gpt-5.1-chat-latest` model support: added to the model map; normalizes to `gpt-5.1`.
- Root cause of compaction issue: OpenCode sends `item_reference` with `fc_*` IDs for function calls. We filter these for stateless mode, but v4.0.1 then removed ALL tool items. Now we only remove orphaned `function_call_output` items (where no matching `function_call` exists).
- Root cause of agent creation issue: we were forcing `stream: true` for all requests and returning SSE for all responses. Now we capture the original `stream` value before transformation and convert SSE→JSON only when the original request wasn't streaming.
- The Codex API always receives `stream: true` (required), but response handling is based on original intent.
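The capture-then-force pattern above can be sketched as follows; the request shape and helper name are hypothetical simplifications:

```typescript
// Minimal sketch: remember the caller's streaming intent before forcing
// stream: true for the Codex API, as described above.
interface ChatRequest {
  stream?: boolean;
  [key: string]: unknown;
}

function prepareRequest(body: ChatRequest): { upstream: ChatRequest; wantsStream: boolean } {
  // Capture original intent before transformation.
  const wantsStream = body.stream === true;
  // The Codex API requires stream: true on every request.
  return { upstream: { ...body, stream: true }, wantsStream };
}
// Response side (pseudologic): if wantsStream, pass the SSE through;
// otherwise accumulate the SSE events and return a single JSON body.
```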
Bugfix release: Fixes API errors during summary/compaction and GitHub rate limiting.
- Orphaned `function_call_output` errors: fixed 400 errors during summary/compaction requests when OpenCode sends `item_reference` pointers to server-stored function calls. The plugin now filters out `function_call` and `function_call_output` items when no tools are present in the request.
- GitHub API rate limiting: added a fallback mechanism when fetching Codex instructions from GitHub. If the API returns 403 (rate limit), the plugin now falls back to parsing the HTML releases page.
- Root cause: OpenCode's secondary model (gpt-5-nano) uses `item_reference` with `fc_*` IDs to reference stored function calls. Our plugin filters `item_reference` for stateless mode (`store: false`), leaving `function_call_output` orphaned. The Codex API rejects requests with orphaned outputs.
- Fix: when `hasTools === false`, filter out all `function_call` and `function_call_output` items from the input array.
- GitHub fallback chain: API endpoint → HTML page → redirect URL parsing → HTML regex parsing.
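The fix above (later refined in a subsequent release to drop only orphaned outputs) can be sketched like this; the item shape and function name are simplified assumptions:

```typescript
// Hedged sketch of the hasTools-gated filter described above.
interface InputItem {
  type: string;
  call_id?: string;
}

function filterToolItems(input: InputItem[], hasTools: boolean): InputItem[] {
  if (hasTools) return input;
  // With no tools in the request, drop function_call/function_call_output
  // items so the Codex API never sees orphaned outputs.
  return input.filter(
    (item) => item.type !== "function_call" && item.type !== "function_call_output"
  );
}
```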
Major release: Complete prompt engineering overhaul matching official Codex CLI behavior, with full GPT-5.1 Codex Max support.
- Full Codex Max support with dedicated prompt including frontend design guidelines
- Model-specific prompts matching Codex CLI's prompt selection logic
- GPT-5.0 → GPT-5.1 migration as legacy models are phased out
- Model-specific system prompts: the plugin now fetches the correct Codex prompt based on model family, matching Codex CLI's `model_family.rs` logic:
  - `gpt-5.1-codex-max*` → `gpt-5.1-codex-max_prompt.md` (117 lines, includes frontend design guidelines)
  - `gpt-5.1-codex*`, `gpt-5.1-codex-mini*` → `gpt_5_codex_prompt.md` (105 lines, focused coding prompt)
  - `gpt-5.1*` → `gpt_5_1_prompt.md` (368 lines, full behavioral guidance)
- New `ModelFamily` type (`"codex-max" | "codex" | "gpt-5.1"`) for prompt selection.
- New `getModelFamily()` function to determine prompt selection based on the normalized model name.
- Model family now logged in request logs for debugging (`modelFamily` field in after-transform logs).
- 16 new unit tests for model family detection (now 191 total unit tests).
- Integration tests now verify correct model family selection (13 integration tests with family verification).
- Legacy GPT-5.0 models now map to GPT-5.1: all legacy `gpt-5` model variants automatically normalize to their `gpt-5.1` equivalents as GPT-5.0 is being phased out by OpenAI:
  - `gpt-5-codex` → `gpt-5.1-codex`
  - `gpt-5` → `gpt-5.1`
  - `gpt-5-mini`, `gpt-5-nano` → `gpt-5.1`
  - `codex-mini-latest` → `gpt-5.1-codex-mini`
- Lazy instruction loading: Instructions are now fetched per-request based on model family (not pre-loaded at initialization).
- Separate caching per model family: each model family has its own cached prompt file:
  - `codex-max-instructions.md` + `codex-max-instructions-meta.json`
  - `codex-instructions.md` + `codex-instructions-meta.json`
  - `gpt-5.1-instructions.md` + `gpt-5.1-instructions-meta.json`
- Fixed OpenCode prompt cache URL to fetch from the `dev` branch instead of the non-existent `main` branch.
- Fixed the model configuration test script to correctly identify model logs in multi-model sessions (OpenCode uses a small model like `gpt-5-nano` for title generation alongside the user's selected model).
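The legacy-to-5.1 normalization table listed above can be sketched as a simple lookup; the constant and function names are illustrative, not the plugin's real identifiers:

```typescript
// Sketch of the legacy GPT-5.0 -> GPT-5.1 mapping described in this release.
const LEGACY_MODEL_MAP: Record<string, string> = {
  "gpt-5-codex": "gpt-5.1-codex",
  "gpt-5": "gpt-5.1",
  "gpt-5-mini": "gpt-5.1",
  "gpt-5-nano": "gpt-5.1",
  "codex-mini-latest": "gpt-5.1-codex-mini",
};

function normalizeLegacyModel(model: string): string {
  // Case-insensitive lookup; unknown models pass through unchanged.
  return LEGACY_MODEL_MAP[model.toLowerCase()] ?? model;
}
```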
This release brings full parity with Codex CLI's prompt engineering:
- Codex family (105 lines): Concise, tool-focused prompt for coding tasks
- Codex Max family (117 lines): Adds frontend design guidelines for UI work
- GPT-5.1 general (368 lines): Comprehensive behavioral guidance, personality, planning
- GPT 5.1 Codex Max support: normalization, per-model defaults, and new presets (`gpt-5.1-codex-max`, `gpt-5.1-codex-max-xhigh`) with extended reasoning options (including `none`/`xhigh`) while keeping the 272k context / 128k output limits.
- Typing and config support for new reasoning options (`none`/`xhigh`, summary `off`/`on`) plus updated test matrix entries.
- Codex Mini clamping now downgrades unsupported `xhigh` to `high` and guards against `none`/`minimal` inputs.
- Documentation, config guides, and validation scripts now reflect 13 verified GPT 5.1 variants (3 codex, 5 codex-max, 2 codex-mini, 3 general), including Codex Max. See README for details on pre-configured variants.
- GPT 5.1 model family support: normalization for `gpt-5.1`, `gpt-5.1-codex`, and `gpt-5.1-codex-mini` plus new GPT 5.1-only presets in the canonical `config/opencode-legacy.json`.
- Documentation updates (README, docs, AGENTS) describing the 5.1 families, their reasoning defaults, and how they map to ChatGPT slugs and token limits.
- Model normalization docs and tests now explicitly cover both 5.0 and 5.1 Codex/general families and the two Codex Mini tiers.
- The legacy GPT 5.0 full configuration is now published separately; new installs should prefer the 5.1 presets in `config/opencode-legacy.json`.
- Codex Mini support end-to-end: normalization to the `codex-mini-latest` slug, proper reasoning defaults, and two new presets (`gpt-5-codex-mini-medium`/`gpt-5-codex-mini-high`).
- Documentation & configuration updates describing the Codex Mini tier (200k input / 100k output tokens) plus refreshed totals (11 presets, 160+ unit tests).
- Prevented Codex Mini from inheriting the lightweight (`minimal`) reasoning profile used by `gpt-5-mini`/`nano`, ensuring the API always receives supported effort levels.
- Codex-style usage-limit messaging that mirrors the 5-hour and weekly windows reported by the Codex CLI.
- Documentation guidance noting that OpenCode's context auto-compaction and usage sidebar require the canonical `config/opencode-legacy.json`.
- Prompt caching now relies solely on the host-supplied `prompt_cache_key`; conversation/session headers are forwarded only when OpenCode provides one.
- CODEX_MODE bridge prompt refreshed to the newest Codex CLI release so tool awareness stays in sync.
- Clarified README, docs, and configuration references so the canonical config matches shipped behaviour.
- Pinned `hono` (4.10.4) and `vite` (7.1.12) to resolve upstream security advisories.
- Comprehensive compliance documentation (ToS guidance, security, privacy) and a full user/developer doc set.
- Per-model configuration lookup, stateless multi-turn conversations, case-insensitive model normalization, and GitHub instruction caching.
- README cache-clearing snippet now runs in a subshell from the home directory to avoid path issues while removing cached plugin files.
- Enhanced CODEX_MODE bridge prompt with Task tool and MCP awareness plus ETag-backed verification of OpenCode system prompts.
- Request transformation made async to support prompt verification caching; AGENTS.md renamed to provide cross-agent guidance.
- Full TypeScript rewrite with strict typing, 123 automated tests, and nine pre-configured model variants matching the Codex CLI.
- CODEX_MODE introduced (enabled by default) with a lightweight bridge prompt and configurability via config file or the `CODEX_MODE` env var.
- Library reorganized into semantic folders (auth, prompts, request, etc.) and OAuth flow polished with the new success page.
- Major internal refactor splitting the runtime into focused modules (logger, request/response handlers) and removing legacy debug output.
- ETag-based GitHub caching for Codex instructions and release-tag tracking for more stable prompt updates.
- Default model fallback, text verbosity initialization, and standardized error logging prefixes.
- README clarifications: opencode auto-installs plugins, config locations, and streamlined quick-start instructions.
- Initial production release with ChatGPT Plus/Pro OAuth support, tool remapping, auto-updating Codex instructions, and zero runtime dependencies.