[wip] test OAuth provider revamp #588
rahulkarajgikar wants to merge 27 commits into truffle-ai:main from
Conversation
@rahulkarajgikar is attempting to deploy a commit to Shaunak's projects Team on Vercel. A member of the Team first needs to authorize it.
📝 Walkthrough

Extends Dexto's authentication infrastructure with multi-method provider connections, introduces support for new LLM providers (minimax variants, zhipuai, zai, moonshotai), renames cloud providers (vertex → google-vertex, bedrock → amazon-bedrock, glm → zhipuai), implements LLM authentication profiles persistence with runtime resolution, and adds an interactive

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User (CLI)
    participant CLI as Connect Command
    participant Prompts as Interactive Prompts
    participant OAuth as OAuth Endpoint
    participant Store as Auth Profiles Store
    participant Factory as LLM Factory
    User->>CLI: pnpm dexto connect (interactive)
    CLI->>Prompts: Select provider from CONNECT_PROVIDERS
    Prompts-->>CLI: provider selected
    alt Single Method
        CLI->>Prompts: Auto-select method
    else Multiple Methods
        CLI->>Prompts: Select auth method
        Prompts-->>CLI: method chosen
    end
    alt API Key / Token Method
        CLI->>Prompts: Prompt for credentials
        Prompts-->>CLI: credentials entered
    else OAuth Method (OpenAI/MiniMax)
        CLI->>OAuth: Initiate device-code flow
        OAuth-->>CLI: user_code + verification_url
        CLI->>Prompts: Display code to user
        Prompts-->>User: Open browser for auth
        User->>User: Authorize on provider site
        CLI->>OAuth: Poll token endpoint
        OAuth-->>CLI: access_token + refresh_token
    end
    CLI->>Store: upsertLlmAuthProfile(profile)
    Store-->>CLI: Profile persisted (0o600)
    CLI->>Store: setDefaultLlmAuthProfile(provider)
    Store-->>CLI: Default set
    CLI->>User: ✓ Connected [provider:method]
    Note over Factory: Runtime: When LLM needed
    Factory->>Store: resolveRuntimeAuth(provider, model)
    Store-->>Factory: LlmRuntimeAuthOverrides
    Factory->>Factory: Inject apiKey/headers/fetch
```
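The override-injection step at the end of this flow can be sketched as follows. The type and field names (`LlmRuntimeAuthOverrides`, `apiKey`, `baseURL`, `headers`) come from the walkthrough, but the merge logic here is an illustrative assumption, not the PR's actual implementation:

```typescript
// Shapes inferred from the walkthrough; the real types live in the PR's auth/types.ts.
interface LlmRuntimeAuthOverrides {
    apiKey?: string;
    baseURL?: string;
    headers?: Record<string, string>;
}

interface ProviderClientOptions {
    apiKey?: string;
    baseURL?: string;
    headers?: Record<string, string>;
}

// Merge resolved auth overrides into the provider client options.
// Stored credentials win over env-derived defaults; a null result
// means "no stored credential" and leaves the base config untouched.
function applyAuthOverrides(
    base: ProviderClientOptions,
    overrides: LlmRuntimeAuthOverrides | null
): ProviderClientOptions {
    if (!overrides) return base; // fall back to env vars
    return {
        ...base,
        ...(overrides.apiKey !== undefined ? { apiKey: overrides.apiKey } : {}),
        ...(overrides.baseURL !== undefined ? { baseURL: overrides.baseURL } : {}),
        headers: { ...base.headers, ...overrides.headers },
    };
}
```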
```mermaid
sequenceDiagram
    participant Client as Client Request
    participant Factory as LLM Factory
    participant Resolver as Auth Resolver
    participant Store as Credentials Store
    participant OAuth as OAuth Token Endpoint
    participant LLM as LLM Provider
    Client->>Factory: createVercelModel(config, {authResolver})
    Factory->>Resolver: resolveRuntimeAuth({provider, model})
    alt Has Stored Credential
        Resolver->>Store: Get credential for provider
        Store-->>Resolver: credential (api_key|token|oauth)
        alt OAuth Credential Expired
            Resolver->>OAuth: Exchange refresh_token
            OAuth-->>Resolver: new access_token
            Resolver->>Store: Update credential with refreshed token
            Store-->>Resolver: ✓ Updated
        end
    else No Credential
        Resolver-->>Factory: null (fallback to env vars)
    end
    Resolver-->>Factory: LlmRuntimeAuthOverrides
    Factory->>LLM: Call with injected {apiKey, baseURL, headers, fetch}
    LLM-->>Factory: Response
    Factory-->>Client: Streamed completion
```
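The "OAuth Credential Expired" branch above boils down to an expiry check before each call. A minimal sketch, assuming an epoch-millisecond `expiresAt` field on the stored credential (all names here are illustrative, not the PR's actual schema):

```typescript
// Illustrative credential shape; not the PR's actual schema.
interface OAuthCredential {
    accessToken: string;
    refreshToken: string;
    expiresAt: number; // epoch milliseconds
}

// Refresh slightly before the hard deadline so an in-flight request
// doesn't race the token's actual expiry on the server.
const EXPIRY_SKEW_MS = 30_000;

function needsRefresh(cred: OAuthCredential, now: number = Date.now()): boolean {
    return cred.expiresAt - EXPIRY_SKEW_MS <= now;
}

async function resolveAccessToken(
    cred: OAuthCredential,
    refresh: (refreshToken: string) => Promise<OAuthCredential>
): Promise<OAuthCredential> {
    // Exchange the refresh token only when the access token is (nearly) expired.
    return needsRefresh(cred) ? refresh(cred.refreshToken) : cred;
}
```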
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 16
🧹 Nitpick comments (16)
vitest.integration.config.ts (1)
6-38: Extract shared aliases to avoid duplication with `vitest.config.ts`.

The entire `alias` object in this file is now byte-for-byte identical to the one in `vitest.config.ts` (lines 6-38). Any new package alias must be added to both files, which is a maintenance risk. Extract the shared alias map into a dedicated helper so both configs import it:
♻️ Proposed refactor
Create a new file `vitest.aliases.ts` at the repo root:

```ts
// vitest.aliases.ts
import path from 'path';

export const workspaceAliases: Record<string, string> = {
    '@dexto/core/utils/path.js': path.resolve(__dirname, 'packages/core/src/utils/path.ts'),
    '@dexto/core': path.resolve(__dirname, 'packages/core/src/index.ts'),
    '@dexto/agent-config': path.resolve(__dirname, 'packages/agent-config/src/index.ts'),
    '@dexto/agent-management': path.resolve(__dirname, 'packages/agent-management/src/index.ts'),
    '@dexto/tools-builtins': path.resolve(__dirname, 'packages/tools-builtins/src/index.ts'),
    '@dexto/tools-filesystem': path.resolve(__dirname, 'packages/tools-filesystem/src/index.ts'),
    '@dexto/tools-lifecycle': path.resolve(__dirname, 'packages/tools-lifecycle/src/index.ts'),
    '@dexto/tools-plan': path.resolve(__dirname, 'packages/tools-plan/src/index.ts'),
    '@dexto/tools-process': path.resolve(__dirname, 'packages/tools-process/src/index.ts'),
    '@dexto/tools-todo': path.resolve(__dirname, 'packages/tools-todo/src/index.ts'),
    '@dexto/image-local': path.resolve(__dirname, 'packages/image-local/src/index.ts'),
    '@dexto/image-logger-agent': path.resolve(__dirname, 'packages/image-logger-agent/src/index.ts'),
    '@dexto/orchestration': path.resolve(__dirname, 'packages/orchestration/src/index.ts'),
    '@dexto/analytics': path.resolve(__dirname, 'packages/analytics/src/index.ts'),
    '@dexto/storage/schemas': path.resolve(__dirname, 'packages/storage/src/schemas.ts'),
    '@dexto/storage': path.resolve(__dirname, 'packages/storage/src/index.ts'),
};
```

Then in both `vitest.integration.config.ts` and `vitest.config.ts`:

```diff
 import { defineConfig } from 'vitest/config';
 import path from 'path';
+import { workspaceAliases } from './vitest.aliases';

 export default defineConfig({
     resolve: {
-        alias: {
-            '@dexto/core/utils/path.js': path.resolve(__dirname, 'packages/core/src/utils/path.ts'),
-            // ... (all duplicated entries)
-        },
+        alias: workspaceAliases,
     },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vitest.integration.config.ts` around lines 6 - 38, The alias map in vitest.integration.config.ts is duplicated from vitest.config.ts; extract it into a shared constant (e.g., workspaceAliases) exported from a new vitest.aliases.ts and import it into both configs so they reuse the same map. Create vitest.aliases.ts that builds and exports workspaceAliases (using path.resolve for each package path), then replace the inline alias object in vitest.integration.config.ts and vitest.config.ts with an import of workspaceAliases and use that variable as the alias field. Ensure the exported name matches the import in both configs (workspaceAliases) and keep the '@dexto/core/utils/path.js' special key as-is.

packages/image-local/test/import.integration.test.ts (1)
50-50: Consider documenting why 60 s is necessary.

A 4× timeout increase is likely needed to cover TypeScript source resolution and its transitive imports during the integration test, but an inline comment explaining the reason (e.g., slow TypeScript source import chain) would help future readers understand whether the ceiling can be lowered later.
💡 Suggested comment
```diff
-    }, 60_000);
+    // Timeout increased to 60 s: Vitest resolves the full TS source graph of image-local and its
+    // transitive workspace packages at runtime, which is significantly slower than loading pre-built dist.
+    }, 60_000);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/image-local/test/import.integration.test.ts` at line 50, Add an inline comment next to the 60_000 test timeout in the test declaration in import.integration.test.ts explaining why the 60s ceiling is required (e.g., to allow slow TypeScript source resolution and transitive imports during this integration test), so future readers know this is intentional and can reassess if performance improves; update the comment wording to mention TypeScript resolution and transitive import chain as the reason.

packages/image-logger-agent/test/import.integration.test.ts (1)
22-22: 60-second timeout is 4× the previous value — confirm the root cause.

A 60 s ceiling for a single import-validation test is unusually high. If it was bumped to fix flaky timeouts, there may be a latent initialization performance issue (e.g., slow cold-start, uncached resolution, network I/O) worth investigating and resolving directly rather than absorbing with a larger timeout.
packages/cli/src/cli/ink-cli/utils/llm-provider-display.ts (1)
21-21: Optional: clean up double parentheses in the `moonshotai-cn` display name.

`'Moonshot AI (Kimi) (CN)'` nests two parenthetical groups, which looks awkward in the model picker. Consider `'Moonshot AI - Kimi (CN)'`.

✏️ Proposed fix
```diff
-    'moonshotai-cn': 'Moonshot AI (Kimi) (CN)',
+    'moonshotai-cn': 'Moonshot AI - Kimi (CN)',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/ink-cli/utils/llm-provider-display.ts` at line 21, Update the display name for the provider key 'moonshotai-cn' in llm-provider-display.ts to remove the nested parentheses; change the string value from "Moonshot AI (Kimi) (CN)" to "Moonshot AI - Kimi (CN)" so the model picker shows a cleaner label for the moonshotai-cn provider.

packages/cli/src/cli/auth/service.ts (2)
42-45: Use `getDextoGlobalPath(AUTH_DIR)` instead of `getDextoGlobalPath(AUTH_DIR, '')`.

The `filename` parameter is optional. Passing an empty string to derive the directory is non-idiomatic; simply omit the second argument to get the base directory path directly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/auth/service.ts` around lines 42 - 45, Replace the non-idiomatic call getDextoGlobalPath(AUTH_DIR, '') used to compute dextoDir with a simple getDextoGlobalPath(AUTH_DIR); update the variable dextoDir assignment (near authPath and the fs.mkdir call) to use getDextoGlobalPath(AUTH_DIR) so the base directory path is derived via the optional filename parameter omission instead of passing an empty string.
10-11: Centralize auth path constants to prevent silent authentication failures.
`AUTH_DIR = 'auth'` and `AUTH_CONFIG_FILE = 'dexto.json'` are defined identically in both `packages/cli/src/cli/auth/service.ts` (lines 10–11) and `packages/agent-management/src/utils/dexto-auth.ts` (lines 12–13). If these values drift in the future, the CLI would write auth config to a path that `agent-management` no longer reads, causing unauthenticated state without clear errors.

Export these constants from `packages/agent-management/src/utils/dexto-auth.ts` and import them in the CLI service to maintain a single source of truth.

♻️ Proposed fix
In `packages/agent-management/src/utils/dexto-auth.ts`, export the constants:

```diff
-const AUTH_DIR = 'auth';
-const AUTH_CONFIG_FILE = 'dexto.json';
+export const AUTH_DIR = 'auth';
+export const AUTH_CONFIG_FILE = 'dexto.json';
```

In `packages/cli/src/cli/auth/service.ts`, import instead of redefining:

```diff
-const AUTH_DIR = 'auth';
-const AUTH_CONFIG_FILE = 'dexto.json';
+import { AUTH_DIR, AUTH_CONFIG_FILE } from '@dexto/agent-management';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/auth/service.ts` around lines 10 - 11, Replace the duplicated AUTH_DIR and AUTH_CONFIG_FILE constants by exporting them from the existing dexto-auth module and importing them into the CLI auth service: in packages/agent-management/src/utils/dexto-auth.ts export the AUTH_DIR and AUTH_CONFIG_FILE symbols (ensure they are exported), then remove the duplicate consts from packages/cli/src/cli/auth/service.ts and import { AUTH_DIR, AUTH_CONFIG_FILE } from that dexto-auth module so both modules use the single shared source of truth.

packages/cli/src/cli/commands/connect/minimax-portal.ts (2)
14-18: Prefer `URLSearchParams` over manual form encoding.

`openai-codex.ts` (line 148) already uses `new URLSearchParams({...}).toString()` for the same purpose. Using `URLSearchParams` is simpler, handles edge cases (special characters), and reduces duplication.

♻️ Proposed fix
```diff
-function toFormUrlEncoded(data: Record<string, string>): string {
-    return Object.entries(data)
-        .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`)
-        .join('&');
-}
```

Then replace usages:

```diff
-        body: toFormUrlEncoded({
+        body: new URLSearchParams({
             response_type: 'code',
             ...
-        }),
+        }).toString(),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/commands/connect/minimax-portal.ts` around lines 14 - 18, Replace the manual toFormUrlEncoded implementation with URLSearchParams: change the function to use new URLSearchParams(data).toString() (keeping the same function name toFormUrlEncoded so callers don’t change) and update any call sites if needed; this aligns with openai-codex.ts usage and ensures correct encoding of special characters and less duplicated logic.
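As a point of comparison, `URLSearchParams` handles the reserved-character cases a hand-rolled form encoder has to get right itself:

```typescript
// URLSearchParams percent-encodes keys and values, so '&' and '='
// inside a value cannot break the form encoding.
const body = new URLSearchParams({
    grant_type: 'authorization_code',
    code: 'abc&def=1',
}).toString();
// → 'grant_type=authorization_code&code=abc%26def%3D1'
```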
27-34: `toExpiresAt` heuristic is clever but undocumented at the call site.

The 10-billion threshold works for distinguishing seconds from epoch-ms, but callers need to know the semantics. Consider adding a brief inline comment at each call site, or narrowing the contract so the function name reflects the ambiguity (e.g., `normalizeExpiryToEpochMs`).

🤖 Prompt for AI Agents
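A sketch of the suggested rename, using the 10-billion threshold described above (illustrative — the PR's actual `toExpiresAt` may differ in detail):

```typescript
// Values above ~10^10 can't plausibly be a duration in seconds
// (that would be ~317 years), so treat them as an absolute
// epoch-millisecond timestamp; smaller values are seconds-from-now.
const EPOCH_MS_THRESHOLD = 10_000_000_000;

function normalizeExpiryToEpochMs(expiry: number, now: number = Date.now()): number {
    return expiry > EPOCH_MS_THRESHOLD ? expiry : now + expiry * 1000;
}
```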
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/commands/connect/minimax-portal.ts` around lines 27 - 34, The helper toExpiresAt is ambiguous about accepting seconds vs epoch-ms; rename it to normalizeExpiryToEpochMs (or alternatively add a clear JSDoc and inline comments at every call site) so callers know it accepts either seconds or epoch-ms and returns epoch milliseconds; update the function name (toExpiresAt -> normalizeExpiryToEpochMs), adjust all call sites in minimax-portal.ts (and any other imports), and add a short JSDoc on the function describing the 10_000_000_000 threshold and return value.

packages/cli/src/cli/commands/connect/openai-codex.ts (1)
128-132: Hardcoded +3000 ms on top of the server-specified poll interval.

The extra 3 seconds added to every pending-poll wait (line 130) makes each retry cycle `pollIntervalMs + 3000`. If the server specifies a 5-second interval, the effective wait becomes 8 seconds per cycle. This is conservative, which is fine for avoiding rate limits, but worth documenting the rationale with a brief comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/commands/connect/openai-codex.ts` around lines 128 - 132, The extra fixed delay of 3000ms added to each pending poll (in the block that checks response.status === 403 || response.status === 404) causes waits of pollIntervalMs + 3000; update the code to either remove or make this offset configurable and add a brief comment explaining why any additional backoff exists (e.g., to reduce tight retry loops and avoid rate limits or account for server processing time). Locate the pending-poll handling around response.status and pollIntervalMs and either (a) replace the hardcoded +3000ms with a named constant or configuration (e.g., PENDING_POLL_BACKOFF_MS) or (b) remove it and document that pollIntervalMs should be trusted; in all cases add a one-line comment describing the rationale for the chosen approach.

packages/cli/src/cli/commands/connect/oauth-error.ts (1)
7-27: `status` field only extracted when it's a string — numeric status codes will be silently skipped.

`addField` checks `typeof value !== 'string'` (line 14), so a numeric `status` field (e.g., `{ "status": 403 }`) in the error payload will be ignored. This is likely fine since `response.status` is already captured in the status line, but worth confirming the intent is to only display string-typed fields.

🤖 Prompt for AI Agents
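If numeric fields should be displayed too, `addField` could coerce finite numbers before the string check. A sketch under the assumption that `addField` trims and length-caps values (the actual signature in `oauth-error.ts` may differ):

```typescript
// Whitelist-style field extraction: accept strings and finite numbers,
// trim, cap length, and skip everything else (objects, arrays, booleans).
function addField(parts: string[], key: string, value: unknown, maxLen = 200): void {
    if (typeof value === 'number' && Number.isFinite(value)) {
        value = String(value); // e.g. { "status": 403 } → 'status=403'
    }
    if (typeof value !== 'string') return;
    const trimmed = value.trim().slice(0, maxLen);
    if (trimmed.length > 0) parts.push(`${key}=${trimmed}`);
}
```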
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/commands/connect/oauth-error.ts` around lines 7 - 27, The helper toWhitelistedOauthErrorText currently skips non-string fields because addField only accepts strings, causing numeric status codes (e.g., status: 403) to be omitted; update the logic in toWhitelistedOauthErrorText/addField so that for the 'status' (and optionally 'status_msg') keys you coerce number values to strings (trim and shorten as done for strings) before pushing to parts, while preserving the existing behavior for other keys to only accept strings.

packages/agent-management/src/auth/runtime-auth-resolver.test.ts (1)
29-52: Minor: `originalFetch` restore is guarded by truthiness — safe in practice but slightly fragile.

Line 30: `if (originalFetch)` means if `globalThis.fetch` was somehow `undefined` before the test, the mock wouldn't be cleaned up. In practice Node 18+ always has `fetch`, so this is fine. A more defensive pattern would unconditionally restore:

```diff
-        if (originalFetch) {
-            globalThis.fetch = originalFetch;
-        }
+        globalThis.fetch = originalFetch!;
```

Not blocking — the current code works correctly in all realistic environments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/agent-management/src/auth/runtime-auth-resolver.test.ts` around lines 29 - 52, The test teardown in afterEach currently restores fetch only when originalFetch is truthy (if (originalFetch)), which is fragile; change it to unconditionally restore the previous fetch reference by assigning globalThis.fetch = originalFetch (or, if you prefer defensive typing, check typeof originalFetch !== 'undefined' before assigning) so the mocked fetch is always cleaned up — update the afterEach block that references originalFetch and globalThis.fetch and keep the rest of the cleanup (env resets, fs.rm, vi.restoreAllMocks) unchanged.

packages/cli/src/cli/commands/connect/minimax-portal.test.ts (2)
26-33: Redundant `vi.resetAllMocks()` in both `beforeEach` and `afterEach`.

`afterEach` already resets mocks after each test, so the `beforeEach` call is redundant.

♻️ Proposed cleanup

```diff
-    beforeEach(() => {
-        vi.resetAllMocks();
-    });
-
     afterEach(() => {
         global.fetch = originalFetch;
         vi.resetAllMocks();
     });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/commands/connect/minimax-portal.test.ts` around lines 26 - 33, Remove the redundant mock reset in the beforeEach and rely on the afterEach to restore state: delete the vi.resetAllMocks() call inside beforeEach so only afterEach performs vi.resetAllMocks() and restores global.fetch to originalFetch; update minimax-portal.test.ts by keeping the afterEach block with global.fetch = originalFetch and vi.resetAllMocks(), and remove the beforeEach vi.resetAllMocks() invocation.
35-88: Only the happy path is exercised.

The OAuth device-code flow has several distinct failure modes worth covering (state mismatch, missing `user_code`/`verification_uri`, non-ok code response, `status: 'error'` from the token endpoint, poll-loop timeout). These are especially important given the PKCE state verification in `loginMiniMaxPortalDeviceCode`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/commands/connect/minimax-portal.test.ts` around lines 35 - 88, Add unit tests in minimax-portal.test.ts around loginMiniMaxPortalDeviceCode to cover non-happy paths: create separate cases that (1) return a different state from /oauth/code and assert the callback rejects due to state mismatch, (2) return a non-ok or malformed response from /oauth/code (missing user_code or verification_uri) and assert an error is thrown, (3) return a non-ok HTTP or { status: 'error' } payload from /oauth/token and assert callback rejects, and (4) simulate the polling loop timing out (e.g., provide expired_in/interval such that callback fails) and assert timeout behavior; implement each by altering the mockFetch behavior used in the existing test to return the appropriate responses and assert thrown errors or rejected promises from the flow.callback() call.

packages/core/src/llm/index.ts (1)
7-7: Prefer explicit named exports for `auth/types` to keep the barrel surface intentional.

The coding guidelines state "avoid wildcard exports; prefer explicit named exports." Since `auth/types.ts` currently exports only three symbols (`LlmAuthResolver`, `LlmRuntimeAuthOverrides`, `ResolveLlmRuntimeAuthInput`), a named re-export would prevent unintentional surface expansion as the auth module grows. This also aligns with the file's existing pattern of using explicit exports for specific modules (see `openrouter-model-registry` and `auto-update`).

♻️ Suggested change

```diff
-export * from './auth/types.js';
+export type {
+    LlmAuthResolver,
+    LlmRuntimeAuthOverrides,
+    ResolveLlmRuntimeAuthInput,
+} from './auth/types.js';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/llm/index.ts` at line 7, Replace the wildcard export of auth/types with an explicit named re-export: export the three known symbols LlmAuthResolver, LlmRuntimeAuthOverrides, and ResolveLlmRuntimeAuthInput from './auth/types.js' so the barrel surface remains intentional and cannot accidentally expand if auth/types adds more exports later; locate the current export line that re-exports from './auth/types.js' in packages/core/src/llm/index.ts and change it to explicitly export those three symbols.

packages/agent-management/src/auth/llm-profiles.test.ts (1)
34-87: Good coverage of the core store operations.

The three tests cover the primary happy-path and corruption-recovery scenarios well.

One gap: `loadLlmAuthProfilesStore` also handles the case where the file contains valid JSON but fails Zod schema validation (per the `safeParse` path in `llm-profiles.ts` lines 214–217). Consider adding a test case for that — e.g. writing `{"version": 999}` or `{"unexpected": true}` and asserting the same backup-and-empty-store behavior as the corrupt-JSON test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/agent-management/src/auth/llm-profiles.test.ts` around lines 34 - 87, Add a new test that simulates a file containing valid JSON that fails Zod validation and assert it is backed up and replaced with an empty store; specifically, write a JSON like {"version":999} or {"unexpected":true} to the path returned by getLlmAuthProfilesPath(), call loadLlmAuthProfilesStore(), then assert the returned store has defaults and profiles empty and version 1 (or expected default), that the original file was removed, that a backup file named starting with "llm-profiles.json.corrupt." exists and contains the original invalid JSON, and that the backup file permissions are 0o600 — mirroring the assertions used in the existing corrupt-JSON test that exercises loadLlmAuthProfilesStore().

packages/agent-management/src/auth/llm-profiles.ts (1)
26-27: Lock stale timeout should exceed lock acquisition timeout.
`STORE_LOCK_STALE_MS === STORE_LOCK_TIMEOUT_MS` means a legitimately held lock can be removed by a waiting process at the exact moment it times out, creating a window where two processes are inside the critical section simultaneously. Set the stale timeout to at least 2× the lock timeout so a valid lock is never prematurely reaped.

Proposed fix

```diff
 const STORE_LOCK_TIMEOUT_MS = 15_000;
-const STORE_LOCK_STALE_MS = 15_000;
+const STORE_LOCK_STALE_MS = 30_000;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/agent-management/src/auth/llm-profiles.ts` around lines 26 - 27, STORE_LOCK_STALE_MS is currently equal to STORE_LOCK_TIMEOUT_MS which allows a race where a waiting process may reap a still-valid lock; change the stale timeout to be at least 2× the acquisition timeout by setting STORE_LOCK_STALE_MS to STORE_LOCK_TIMEOUT_MS * 2 (or a constant >= 2×) so a legitimately held lock cannot be prematurely reaped; update the constants where STORE_LOCK_TIMEOUT_MS and STORE_LOCK_STALE_MS are defined accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@feature-plans/oauth-and-more-providers/PLAN.md`:
- Line 181: Fix the Markdown list indentation inconsistency: locate the bullet
that begins "Server-side two-phase OAuth orchestration (authorize → callback)
and pending-state handling:" and any sibling sub-bullets that currently use 1 or
3 leading spaces, and normalize them so top-level bullets start with no leading
space and any explicit sub-bullets use exactly 2 spaces of indentation; update
the sub-bullet lines that are at 1 and 3 spaces to use 2 spaces to match the
other siblings.
- Around line 12-17: Replace all machine-local paths like
`~/Projects/external/openclaw`, `~/Projects/external/opencode`,
`~/Projects/external/pi-mono`, and `~/Projects/external/models.dev` with
portable references: use canonical public repository URLs or GitHub permalinks
to the exact commits/paths for those projects, or add brief clone/checkout
instructions and a NOTE describing required access; also update links referenced
by `PROVIDER_COVERAGE.md` and `NPM_TRANSPORT_COVERAGE.md` to use the same
public/permalink style so readers and agents can follow them reliably.
In `@feature-plans/oauth-and-more-providers/PROVIDER_COVERAGE.md`:
- Around line 55-59: In PROVIDER_COVERAGE.md remove
developer-local/home-prefixed paths and replace them with repo-relative
references or plain filenames; e.g., change
"~/Projects/external/openclaw/src/agents/model-auth.ts" to
"src/agents/model-auth.ts" (or simply "src/agents/model-auth.ts —
auth/profile/env resolution"), and do the same for "models-config.providers.ts
(resolveImplicitProviders())" and "model-selection.ts (normalizeProviderId())";
also scan the rest of the file for other occurrences of machine-specific paths
and update them to repo-relative paths or concise file+function descriptions to
keep the doc portable and user-focused.
In `@packages/agent-management/src/auth/llm-profiles.ts`:
- Around line 249-251: The sync loader currently swallows all errors and always
returns createEmptyStore(); update its catch to mirror the async loader: inspect
the thrown error (capture it as err), if err.code === 'ENOENT' return
createEmptyStore(), otherwise rethrow the error so permission/IO errors are not
masked; apply this change in the sync loader function (the counterpart to the
async loader around line 220) and keep createEmptyStore() behavior only for
missing-file ENOENT cases.
In `@packages/agent-management/src/utils/api-key-resolver.ts`:
- Around line 26-28: The mapping for 'kimi-for-coding' only lists 'KIMI_API_KEY'
but needs to fall back to 'MOONSHOT_API_KEY' like 'moonshotai' and
'moonshotai-cn' (or clarify intentional separation); update the api key mapping
in api-key-resolver.ts so the 'kimi-for-coding' entry includes
['KIMI_API_KEY','MOONSHOT_API_KEY'] to allow the shared credential, referencing
the 'kimi-for-coding' key in the mapping object, or alternatively add a comment
above the 'kimi-for-coding' entry explaining it intentionally uses a separate
key space if that is by design.
- Around line 18-28: The zai and zai-coding-plan entries in the API key mapping
only list 'ZHIPU_API_KEY' so they fail when only 'ZHIPUAI_API_KEY' is present;
update the mapping object in api-key-resolver (the provider -> env keys map) to
add 'ZHIPUAI_API_KEY' as a fallback for the 'zai' and 'zai-coding-plan' keys
(make them mirror the arrays used by 'zhipuai'/'zhipuai-coding-plan' and 'glm').
Locate the mapping variable (the provider map object) and change the arrays for
'zai' and 'zai-coding-plan' to ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'].
In `@packages/cli/src/cli/commands/connect/index.ts`:
- Around line 293-302: The guided setup currently calls
open(`https://models.dev/${provider.modelsDevProviderId}`) directly which can
throw in headless/no-browser environments; wrap the open() call in a try/catch
(mirroring the earlier best‑effort pattern) so failures are logged/ignored
rather than aborting the command. Locate the block handling shouldOpenDocs (the
p.confirm result and p.isCancel check) and around the open() call for
provider.modelsDevProviderId add error handling to swallow or log the error and
continue.
- Around line 42-45: The handleConnectCommand signature currently accepts an
optional options object with an optional interactive flag; change it so options
is a required parameter defaulted in the signature (e.g. function
handleConnectCommand(options = { interactive: true }) ) and make the interactive
property required on that object, then update the internal check to use
options.interactive directly (remove optional chaining). Update any call sites
if necessary to rely on the defaulted parameter. This touches the
handleConnectCommand function and its use of options?.interactive.
- Around line 65-68: The code uses a non-null assertion provider.methods[0]!.id
when computing methodId; replace it with a safe guard: retrieve const
firstMethod = provider.methods[0]; if methods.length === 1 but firstMethod is
undefined, throw or return a clear error, otherwise use firstMethod.id; leave
the await p.select(...) branch intact for the multi-method case. Update the
methodId assignment to use the guarded firstMethod.id to avoid the `!` operator.
In `@packages/cli/src/cli/commands/connect/minimax-portal.test.ts`:
- Line 47: The code-endpoint mock in minimax-portal.test.ts sets expired_in to a
millisecond epoch (Date.now() + 60_000) which is inconsistent with the token
endpoint and OAuth seconds semantics; change the mock's expired_in value used in
the code-endpoint (the expired_in field in the mocked response) to 60 (seconds)
so toExpiresAt and polling/timeout logic treat it correctly and match the
token-endpoint's 3600 representation.
In `@packages/cli/src/cli/commands/connect/minimax-portal.ts`:
- Around line 105-107: The code sets expiresAt = toExpiresAt(oauth.expired_in ??
0) which passes 0 when expired_in is missing, causing toExpiresAt to return
Date.now() and the polling loop in pollToken() to never run; change this to use
a reasonable default expiry when oauth.expired_in is undefined or falsy (e.g.,
const DEFAULT_OAUTH_EXPIRES_MS = 60000) and call toExpiresAt(oauth.expired_in ??
DEFAULT_OAUTH_EXPIRES_MS) (and add the constant near pollStart/pollIntervalMs);
ensure pollIntervalMs logic stays the same and that pollToken() continues to use
expiresAt as the deadline.
In `@packages/cli/src/cli/commands/connect/openai-codex.ts`:
- Around line 87-94: The code casts deviceResponse.json() to deviceData but
doesn't validate required fields: explicitly check that
deviceData.device_auth_id and deviceData.user_code are present and non-empty
before using them (and treat interval via Number.parseInt safely); if either is
missing, log/throw a clear error and stop the flow instead of proceeding to
compute intervalSec/pollIntervalMs and sending a poll request with "undefined".
Update the deviceData handling around the deviceResponse.json() call and the
subsequent intervalSec/pollIntervalMs logic to validate types/values and fail
fast with a descriptive message when required fields are absent.
In `@packages/cli/src/cli/commands/init-app.ts`:
- Around line 23-25: The duplicate type-guard isLLMProvider (takes unknown and
checks typeof + includes against LLM_PROVIDERS) in init-app.ts should be removed
and the canonical, defensively-typed definition exported from provider-setup.ts;
update provider-setup.ts to export its isLLMProvider, remove the local function
in packages/cli/src/cli/commands/init-app.ts, and import the exported
isLLMProvider from packages/cli/src/cli/utils/provider-setup.ts where referenced
so the code uses the single shared implementation.
- Around line 37-42: The explicit generic on p.group is forcing fields like
llmProvider/llmApiKey/createExampleFile to include symbol unions and prevents
the helper from auto-excluding the cancellation symbol; remove the explicit
generic parameter on the p.group(...) call so TypeScript can infer the correct
types (or if you must keep a generic, use types without symbol like llmProvider:
string, createExampleFile: boolean) and rely on the existing onCancel handler
for cancellation; update any code that assumed the widened symbol unions (e.g.,
the answers variable) to use the inferred types returned by p.group.
In `@packages/core/src/utils/api-key-resolver.ts`:
- Around line 22-25: Add a brief clarifying comment next to the API key mapping
entries for the providers 'zai' and 'zhipuai' explaining why 'zai' only lists
'ZHIPU_API_KEY' while 'zhipuai' also includes 'ZHIPUAI_API_KEY'; update the
mapping block containing the symbols 'zai', 'zhipuai', 'zhipuai-coding-plan',
'zai-coding-plan' to note the intentional difference (e.g., that 'zai' only
accepts the legacy ZHIPU_API_KEY or that ZHIPUAI_API_KEY is not applicable for
that provider) so future maintainers understand the rationale.
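A sketch of the commented mapping the prompt asks for; the exact wording of the note is illustrative, though the rationale itself comes from the review:

```typescript
// Illustrative excerpt of the provider-to-env-key mapping with the
// clarifying comment the review requests.
const PROVIDER_ENV_KEYS: Record<string, readonly string[]> = {
    zhipuai: ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'],
    'zhipuai-coding-plan': ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'],
    // NOTE: 'zai' intentionally accepts only the legacy ZHIPU_API_KEY;
    // ZHIPUAI_API_KEY is not applicable for this provider.
    zai: ['ZHIPU_API_KEY'],
    'zai-coding-plan': ['ZHIPU_API_KEY'],
};
```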
In `@packages/server/src/hono/routes/llm-connect.ts`:
- Around line 112-132: The route setDefaultRoute currently lacks error handling
so errors thrown by setDefaultLlmAuthProfile bubble up as 500s; update the route
handler to catch errors from setDefaultLlmAuthProfile, inspect the error
message/type and rethrow a DextoRuntimeError with ErrorType.NOT_FOUND for
missing profiles and ErrorType.CONFLICT for provider mismatches (so
mapErrorTypeToStatus will return 404/409), and ensure the thrown
DextoRuntimeError includes the standard error fields (code, message, scope,
type, severity, endpoint, method). Also update the setDefaultRoute OpenAPI
responses to include 404 and 409 entries using the shared standard error schema
(code:string, message:string, scope:string, type:ErrorType, severity:'error',
endpoint:string, method:string) so clients receive structured errors instead of
opaque 500s.
---
Duplicate comments:
In `@packages/agent-management/src/utils/dexto-auth.ts`:
- Around line 12-13: AUTH_DIR and AUTH_CONFIG_FILE are duplicated here; remove
the local definitions in dexto-auth.ts and import the single source of truth
from the existing auth/service module (the constants defined in
auth/service.ts). Update any usage in this file to reference the imported
AUTH_DIR and AUTH_CONFIG_FILE symbols instead of redeclaring them, ensuring only
one shared constant definition remains across the codebase.
---
Nitpick comments:
In `@packages/agent-management/src/auth/llm-profiles.test.ts`:
- Around line 34-87: Add a new test that simulates a file containing valid JSON
that fails Zod validation and assert it is backed up and replaced with an empty
store; specifically, write a JSON like {"version":999} or {"unexpected":true} to
the path returned by getLlmAuthProfilesPath(), call loadLlmAuthProfilesStore(),
then assert the returned store has defaults and profiles empty and version 1 (or
expected default), that the original file was removed, that a backup file named
starting with "llm-profiles.json.corrupt." exists and contains the original
invalid JSON, and that the backup file permissions are 0o600—mirroring the
assertions used in the existing corrupt-JSON test that exercises
loadLlmAuthProfilesStore().
In `@packages/agent-management/src/auth/llm-profiles.ts`:
- Around line 26-27: STORE_LOCK_STALE_MS is currently equal to
STORE_LOCK_TIMEOUT_MS which allows a race where a waiting process may reap a
still-valid lock; change the stale timeout to be at least 2× the acquisition
timeout by setting STORE_LOCK_STALE_MS to STORE_LOCK_TIMEOUT_MS * 2 (or a
constant >= 2×) so a legitimately held lock cannot be prematurely reaped; update
the constants where STORE_LOCK_TIMEOUT_MS and STORE_LOCK_STALE_MS are defined
accordingly.
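Expressed directly in the constants (the acquisition-timeout value here is an assumption; only the 2x relationship matters):

```typescript
// Assumed acquisition timeout; the point is the 2x relationship below.
const STORE_LOCK_TIMEOUT_MS = 10_000;
// Stale timeout is at least 2x the acquisition timeout so a waiting process
// can never reap a lock that is still legitimately held by another process.
const STORE_LOCK_STALE_MS = STORE_LOCK_TIMEOUT_MS * 2;
```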
In `@packages/agent-management/src/auth/runtime-auth-resolver.test.ts`:
- Around line 29-52: The test teardown in afterEach currently restores fetch
only when originalFetch is truthy (if (originalFetch)), which is fragile; change
it to unconditionally restore the previous fetch reference by assigning
globalThis.fetch = originalFetch (or, if you prefer defensive typing, check
typeof originalFetch !== 'undefined' before assigning) so the mocked fetch is
always cleaned up — update the afterEach block that references originalFetch and
globalThis.fetch and keep the rest of the cleanup (env resets, fs.rm,
vi.restoreAllMocks) unchanged.
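The unconditional restore looks like this as a standalone sketch (not the actual test file):

```typescript
// Capture the original fetch, install a mock, then restore unconditionally,
// with no truthiness guard, so cleanup always runs.
const originalFetch = globalThis.fetch;

globalThis.fetch = (async () =>
    new Response('mocked')) as unknown as typeof fetch;

// afterEach-equivalent teardown: plain assignment restores whatever was there.
globalThis.fetch = originalFetch;
```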
In `@packages/cli/src/cli/auth/service.ts`:
- Around line 42-45: Replace the non-idiomatic call getDextoGlobalPath(AUTH_DIR,
'') used to compute dextoDir with a simple getDextoGlobalPath(AUTH_DIR); update
the variable dextoDir assignment (near authPath and the fs.mkdir call) to use
getDextoGlobalPath(AUTH_DIR) so the base directory path is derived via the
optional filename parameter omission instead of passing an empty string.
- Around line 10-11: Replace the duplicated AUTH_DIR and AUTH_CONFIG_FILE
constants by exporting them from the existing dexto-auth module and importing
them into the CLI auth service: in
packages/agent-management/src/utils/dexto-auth.ts export the AUTH_DIR and
AUTH_CONFIG_FILE symbols (ensure they are exported), then remove the duplicate
consts from packages/cli/src/cli/auth/service.ts and import { AUTH_DIR,
AUTH_CONFIG_FILE } from that dexto-auth module so both modules use the single
shared source of truth.
In `@packages/cli/src/cli/commands/connect/minimax-portal.test.ts`:
- Around line 26-33: Remove the redundant mock reset in the beforeEach and rely
on the afterEach to restore state: delete the vi.resetAllMocks() call inside
beforeEach so only afterEach performs vi.resetAllMocks() and restores
global.fetch to originalFetch; update minimax-portal.test.ts by keeping the
afterEach block with global.fetch = originalFetch and vi.resetAllMocks(), and
remove the beforeEach vi.resetAllMocks() invocation.
- Around line 35-88: Add unit tests in minimax-portal.test.ts around
loginMiniMaxPortalDeviceCode to cover non-happy paths: create separate cases
that (1) return a different state from /oauth/code and assert the callback
rejects due to state mismatch, (2) return a non-ok or malformed response from
/oauth/code (missing user_code or verification_uri) and assert an error is
thrown, (3) return a non-ok HTTP or { status: 'error' } payload from
/oauth/token and assert callback rejects, and (4) simulate the polling loop
timing out (e.g., provide expired_in/interval such that callback fails) and
assert timeout behavior; implement each by altering the mockFetch behavior used
in the existing test to return the appropriate responses and assert thrown
errors or rejected promises from the flow.callback() call.
In `@packages/cli/src/cli/commands/connect/minimax-portal.ts`:
- Around line 14-18: Replace the manual toFormUrlEncoded implementation with
URLSearchParams: change the function to use new URLSearchParams(data).toString()
(keeping the same function name toFormUrlEncoded so callers don’t change) and
update any call sites if needed; this aligns with openai-codex.ts usage and
ensures correct encoding of special characters and less duplicated logic.
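The `URLSearchParams` version is a one-liner; keeping the name `toFormUrlEncoded` means call sites stay untouched:

```typescript
// URLSearchParams implements application/x-www-form-urlencoded serialization,
// so reserved characters ('&', '=') are escaped and spaces become '+'.
function toFormUrlEncoded(data: Record<string, string>): string {
    return new URLSearchParams(data).toString();
}
```

For example, `toFormUrlEncoded({ code: 'a&b', scope: 'read write' })` yields `code=a%26b&scope=read+write`.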
- Around line 27-34: The helper toExpiresAt is ambiguous about accepting seconds
vs epoch-ms; rename it to normalizeExpiryToEpochMs (or alternatively add a clear
JSDoc and inline comments at every call site) so callers know it accepts either
seconds or epoch-ms and returns epoch milliseconds; update the function name
(toExpiresAt -> normalizeExpiryToEpochMs), adjust all call sites in
minimax-portal.ts (and any other imports), and add a short JSDoc on the function
describing the 10_000_000_000 threshold and return value.
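A sketch of the renamed helper with the JSDoc the review asks for. The interpretation (below the threshold means a seconds-from-now duration, above means an epoch-ms timestamp) is an assumption inferred from the surrounding discussion of `expired_in`:

```typescript
/**
 * Normalizes an OAuth expiry value to an epoch-millisecond deadline.
 * Values above 10_000_000_000 are treated as already being epoch ms
 * (interpreted as seconds, that would be roughly year 2286, far beyond
 * any plausible duration); smaller values are treated as seconds from now.
 */
const EPOCH_MS_THRESHOLD = 10_000_000_000;

function normalizeExpiryToEpochMs(value: number, now: number = Date.now()): number {
    return value > EPOCH_MS_THRESHOLD ? value : now + value * 1000;
}
```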
In `@packages/cli/src/cli/commands/connect/oauth-error.ts`:
- Around line 7-27: The helper toWhitelistedOauthErrorText currently skips
non-string fields because addField only accepts strings, causing numeric status
codes (e.g., status: 403) to be omitted; update the logic in
toWhitelistedOauthErrorText/addField so that for the 'status' (and optionally
'status_msg') keys you coerce number values to strings (trim and shorten as done
for strings) before pushing to parts, while preserving the existing behavior for
other keys to only accept strings.
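One possible shape for the coercion; the helper's real signature and truncation limit are not shown in the excerpt, so both are guesses:

```typescript
const MAX_FIELD_LEN = 80; // assumed truncation limit

// Strings pass through as before; numbers are coerced only for the
// whitelisted 'status' / 'status_msg' keys, so e.g. status: 403 survives.
function addField(parts: string[], key: string, value: unknown): void {
    let text: string | undefined;
    if (typeof value === 'string') {
        text = value;
    } else if ((key === 'status' || key === 'status_msg') && typeof value === 'number') {
        text = String(value);
    }
    if (text === undefined) return;
    const trimmed = text.trim().slice(0, MAX_FIELD_LEN);
    if (trimmed.length > 0) parts.push(`${key}=${trimmed}`);
}
```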
In `@packages/cli/src/cli/commands/connect/openai-codex.ts`:
- Around line 128-132: The extra fixed delay of 3000ms added to each pending
poll (in the block that checks response.status === 403 || response.status ===
404) causes waits of pollIntervalMs + 3000; update the code to either remove or
make this offset configurable and add a brief comment explaining why any
additional backoff exists (e.g., to reduce tight retry loops and avoid rate
limits or account for server processing time). Locate the pending-poll handling
around response.status and pollIntervalMs and either (a) replace the hardcoded
+3000ms with a named constant or configuration (e.g., PENDING_POLL_BACKOFF_MS)
or (b) remove it and document that pollIntervalMs should be trusted; in all
cases add a one-line comment describing the rationale for the chosen approach.
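Option (a) as a sketch: a named constant plus the one-line rationale the review asks for (the value and helper name are illustrative):

```typescript
// Extra cushion on "pending" polls to avoid hammering the token endpoint
// while the user is still authorizing in the browser.
const PENDING_POLL_BACKOFF_MS = 3_000;

function nextDelayMs(pollIntervalMs: number, pending: boolean): number {
    return pending ? pollIntervalMs + PENDING_POLL_BACKOFF_MS : pollIntervalMs;
}
```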
In `@packages/cli/src/cli/ink-cli/utils/llm-provider-display.ts`:
- Line 21: Update the display name for the provider key 'moonshotai-cn' in
llm-provider-display.ts to remove the nested parentheses; change the string
value from "Moonshot AI (Kimi) (CN)" to "Moonshot AI - Kimi (CN)" so the model
picker shows a cleaner label for the moonshotai-cn provider.
In `@packages/core/src/llm/index.ts`:
- Line 7: Replace the wildcard export of auth/types with an explicit named
re-export: export the three known symbols LlmAuthResolver,
LlmRuntimeAuthOverrides, and ResolveLlmRuntimeAuthInput from './auth/types.js'
so the barrel surface remains intentional and cannot accidentally expand if
auth/types adds more exports later; locate the current export line that
re-exports from './auth/types.js' in packages/core/src/llm/index.ts and change
it to explicitly export those three symbols.
In `@packages/image-local/test/import.integration.test.ts`:
- Line 50: Add an inline comment next to the 60_000 test timeout in the test
declaration in import.integration.test.ts explaining why the 60s ceiling is
required (e.g., to allow slow TypeScript source resolution and transitive
imports during this integration test), so future readers know this is
intentional and can reassess if performance improves; update the comment wording
to mention TypeScript resolution and transitive import chain as the reason.
In `@vitest.integration.config.ts`:
- Around line 6-38: The alias map in vitest.integration.config.ts is duplicated
from vitest.config.ts; extract it into a shared constant (e.g.,
workspaceAliases) exported from a new vitest.aliases.ts and import it into both
configs so they reuse the same map. Create vitest.aliases.ts that builds and
exports workspaceAliases (using path.resolve for each package path), then
replace the inline alias object in vitest.integration.config.ts and
vitest.config.ts with an import of workspaceAliases and use that variable as the
alias field. Ensure the exported name matches the import in both configs
(workspaceAliases) and keep the '@dexto/core/utils/path.js' special key as-is.
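A minimal shape for the shared module; the entries below are illustrative, and the real map should be copied verbatim from the existing configs (including the `@dexto/core/utils/path.js` special key):

```typescript
// vitest.aliases.ts (sketch): build the alias map once and export it so
// vitest.config.ts and vitest.integration.config.ts import the same object.
import path from 'node:path';

export const workspaceAliases: Record<string, string> = {
    // Special key must stay more specific than the '@dexto/core' prefix below.
    '@dexto/core/utils/path.js': path.resolve('packages/core/src/utils/path.ts'),
    '@dexto/core': path.resolve('packages/core/src'),
    '@dexto/agent-management': path.resolve('packages/agent-management/src'),
};
```

Both configs then replace their inline alias object with `resolve: { alias: workspaceAliases }`.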
- **OpenClaw** (`~/Projects/external/openclaw`) — onboarding wizard + auth profiles + provider auth plugins.
- **OpenCode** (`~/Projects/external/opencode`) — `/connect` UX + models.dev-backed provider catalog + plugin auth methods.
- **pi-mono** (`~/Projects/external/pi-mono`) — model registry generation (multi-source) + explicit `api`/transport discriminators.
- **models.dev** (`~/Projects/external/models.dev`) — provider/model metadata source (and clarifies when catalogs are curated vs exhaustive).
- Provider coverage snapshot (exact provider IDs): [`PROVIDER_COVERAGE.md`](./PROVIDER_COVERAGE.md)
- Provider grouping by `provider.npm` (transport mapping roadmap): [`NPM_TRANSPORT_COVERAGE.md`](./NPM_TRANSPORT_COVERAGE.md)
Local filesystem paths are not portable for the rest of the team or AI agents.
All "Primary references" point to ~/Projects/external/... paths that exist only on the original author's machine. Throughout sections 4–7 there are 30+ additional references to the same local directories (openclaw, opencode, pi-mono, models.dev). Team members and automated agents consuming this plan cannot follow or verify any of these links.
Consider replacing them with GitHub permalinks, public repository URLs, or a brief note explaining how to obtain the reference material (e.g., clone instructions). As per coding guidelines, documentation should avoid exposing unnecessary internal complexity — machine-local paths are an extreme form of this. Based on learnings, this file is intentionally committed with git add -f, so it is expected to be read by the wider team.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@feature-plans/oauth-and-more-providers/PLAN.md` around lines 12 - 17, Replace
all machine-local paths like `~/Projects/external/openclaw`,
`~/Projects/external/opencode`, `~/Projects/external/pi-mono`, and
`~/Projects/external/models.dev` with portable references: use canonical public
repository URLs or GitHub permalinks to the exact commits/paths for those
projects, or add brief clone/checkout instructions and a NOTE describing
required access; also update links referenced by `PROVIDER_COVERAGE.md` and
`NPM_TRANSPORT_COVERAGE.md` to use the same public/permalink style so readers
and agents can follow them reliably.
- Source: `~/Projects/external/opencode/packages/opencode/src/plugin/codex.ts`
- GitHub Copilot uses device-code and supports enterprise variant selection.
- Source: `~/Projects/external/opencode/packages/opencode/src/plugin/copilot.ts`
- Server-side two-phase OAuth orchestration (authorize → callback) and pending-state handling:
Fix inconsistent list indentation (MD005, flagged by markdownlint).
Line 181 opens a sub-bullet at 1 space where the parent is at 0 (expected 0 for a top-level continuation, or 2 for an explicit sub-level). Line 191 opens a sub-bullet at 3 spaces where the sibling items are at 2.
♻️ Proposed fix
```diff
- - Server-side two-phase OAuth orchestration (authorize → callback) and pending-state handling:
+- Server-side two-phase OAuth orchestration (authorize → callback) and pending-state handling:
```

```diff
-   - Key implementation details worth copying (see `CodexAuthPlugin()`):
+  - Key implementation details worth copying (see `CodexAuthPlugin()`):
```

Also applies to: 191-191
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 181-181: Inconsistent indentation for list items at the same level
Expected: 0; Actual: 1
(MD005, list-indent)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@feature-plans/oauth-and-more-providers/PLAN.md` at line 181, Fix the Markdown
list indentation inconsistency: locate the bullet that begins "Server-side
two-phase OAuth orchestration (authorize → callback) and pending-state
handling:" and any sibling sub-bullets that currently use 1 or 3 leading spaces,
and normalize them so top-level bullets start with no leading space and any
explicit sub-bullets use exactly 2 spaces of indentation; update the sub-bullet
lines that are at 1 and 3 spaces to use 2 spaces to match the other siblings.
Primary sources:
- Provider IDs that show up in the **auth/profile/env resolution** surface: `~/Projects/external/openclaw/src/agents/model-auth.ts`
- Provider IDs that are **auto-added** into `models.json` when credentials exist: `~/Projects/external/openclaw/src/agents/models-config.providers.ts` (`resolveImplicitProviders()`)
- Provider normalization (aliases like `opencode-zen` → `opencode`, `qwen` → `qwen-portal`): `~/Projects/external/openclaw/src/agents/model-selection.ts` (`normalizeProviderId()`)
Replace developer-local paths with repo-relative references.
The OpenClaw/OpenCode source lists include machine-specific paths (e.g., ~/Projects/...), which aren’t meaningful to readers and add unnecessary internal detail. Prefer repo-relative paths or a brief file list so the snapshot stays portable and user-focused.
As per coding guidelines: Keep documentation user-focused; avoid exposing unnecessary internal complexity.
Also applies to: 112-117
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@feature-plans/oauth-and-more-providers/PROVIDER_COVERAGE.md` around lines 55
- 59, In PROVIDER_COVERAGE.md remove developer-local/home-prefixed paths and
replace them with repo-relative references or plain filenames; e.g., change
"~/Projects/external/openclaw/src/agents/model-auth.ts" to
"src/agents/model-auth.ts" (or simply "src/agents/model-auth.ts —
auth/profile/env resolution"), and do the same for "models-config.providers.ts
(resolveImplicitProviders())" and "model-selection.ts (normalizeProviderId())";
also scan the rest of the file for other occurrences of machine-specific paths
and update them to repo-relative paths or concise file+function descriptions to
keep the doc portable and user-focused.
```ts
    } catch {
        return createEmptyStore();
    }
```
Sync loader swallows all filesystem errors, masking real failures.
The async version (line 220) correctly rethrows non-ENOENT errors, but the sync version catches everything and silently returns an empty store. A permission error or disk I/O failure would be indistinguishable from a missing file — and a subsequent write could overwrite a valid store.
Proposed fix: mirror the async version's error handling
```diff
-    } catch {
-        return createEmptyStore();
+    } catch (error) {
+        if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
+            return createEmptyStore();
+        }
+        throw error;
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/agent-management/src/auth/llm-profiles.ts` around lines 249 - 251,
The sync loader currently swallows all errors and always returns
createEmptyStore(); update its catch to mirror the async loader: inspect the
thrown error (capture it as err), if err.code === 'ENOENT' return
createEmptyStore(), otherwise rethrow the error so permission/IO errors are not
masked; apply this change in the sync loader function (the counterpart to the
async loader around line 220) and keep createEmptyStore() behavior only for
missing-file ENOENT cases.
```ts
    'minimax-cn': ['MINIMAX_API_KEY'],
    'minimax-coding-plan': ['MINIMAX_API_KEY'],
    'minimax-cn-coding-plan': ['MINIMAX_API_KEY'],
    glm: ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'],
    zhipuai: ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'],
    'zhipuai-coding-plan': ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'],
    zai: ['ZHIPU_API_KEY'],
    'zai-coding-plan': ['ZHIPU_API_KEY'],
    moonshotai: ['MOONSHOT_API_KEY'],
    'moonshotai-cn': ['MOONSHOT_API_KEY'],
    'kimi-for-coding': ['KIMI_API_KEY'],
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find LLMProvider definition in `@dexto/core`
echo "=== Searching for LLMProvider definition in packages/core/src ==="
rg -n "type\s+LLMProvider\s*=|export\s+type\s+LLMProvider" packages/core/src -A 15 -B 1
echo ""
echo "=== Alternative: Look for LLMProvider in any file ==="
fd -e ts -e tsx packages/core/src | xargs rg -l "LLMProvider" | head -5
echo ""
echo "=== Check the actual provider enum/union ==="
rg "LLMProvider" packages/core/src -A 20 -B 2 | head -100

Repository: truffle-ai/dexto
Length of output: 9441
🏁 Script executed:
#!/bin/bash
# Verify specific provider strings exist in LLMProvider
NEW_PROVIDERS=(
"minimax-cn"
"minimax-coding-plan"
"minimax-cn-coding-plan"
"zhipuai"
"zhipuai-coding-plan"
"zai"
"zai-coding-plan"
"moonshotai"
"moonshotai-cn"
"kimi-for-coding"
)
echo "=== Checking if providers exist in packages/core ==="
for provider in "${NEW_PROVIDERS[@]}"; do
result=$(rg "\"$provider\"|'$provider'|$provider\s*:" packages/core/src --type ts 2>/dev/null | head -1)
if [ -z "$result" ]; then
echo "❌ $provider - NOT FOUND"
else
echo "✓ $provider - FOUND"
fi
done
echo ""
echo "=== Full LLMProvider type/enum content ==="
rg -A 50 "type LLMProvider|enum LLMProvider|export.*LLMProvider" packages/core/src --type ts | grep -v "^--$"

Repository: truffle-ai/dexto
Length of output: 50372
🏁 Script executed:
#!/bin/bash
# Look for any comments or documentation about zai vs zhipuai key differences
echo "=== Searching for context about zai/zhipuai keys ==="
rg "zai|ZHIPUAI_API_KEY|ZHIPU_API_KEY" packages -B 2 -A 2 --type ts | grep -v "test\|spec" | head -100

Repository: truffle-ai/dexto
Length of output: 7054
🏁 Script executed:
# Quick confirmation: checking the LLM_PROVIDERS array definition
rg "LLM_PROVIDERS.*=.*as const" packages/core/src/llm/types.ts -A 30

Repository: truffle-ai/dexto
Length of output: 42
Add ZHIPUAI_API_KEY fallback to zai / zai-coding-plan API key resolution.
The zai and zai-coding-plan entries resolve only ZHIPU_API_KEY, while the logically related zhipuai, zhipuai-coding-plan, and glm entries all fall back to both ZHIPU_API_KEY and ZHIPUAI_API_KEY. A user with only ZHIPUAI_API_KEY set will successfully authenticate with zhipuai but silently fail with zai.
Proposed fix
```diff
-    zai: ['ZHIPU_API_KEY'],
-    'zai-coding-plan': ['ZHIPU_API_KEY'],
+    zai: ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'],
+    'zai-coding-plan': ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'],
```

All new provider literals are correctly present in the `LLMProvider` union type; no type issues detected.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/agent-management/src/utils/api-key-resolver.ts` around lines 18 -
28, The zai and zai-coding-plan entries in the API key mapping only list
'ZHIPU_API_KEY' so they fail when only 'ZHIPUAI_API_KEY' is present; update the
mapping object in api-key-resolver (the provider -> env keys map) to add
'ZHIPUAI_API_KEY' as a fallback for the 'zai' and 'zai-coding-plan' keys (make
them mirror the arrays used by 'zhipuai'/'zhipuai-coding-plan' and 'glm').
Locate the mapping variable (the provider map object) and change the arrays for
'zai' and 'zai-coding-plan' to ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'].
```ts
const deviceData = (await deviceResponse.json()) as {
    device_auth_id: string;
    user_code: string;
    interval?: string;
};

const intervalSec = Math.max(Number.parseInt(deviceData.interval ?? '5', 10) || 5, 1);
const pollIntervalMs = intervalSec * 1000;
```
Device code response is not validated — missing fields will silently propagate.
The response is cast with as but device_auth_id and user_code are not checked before use. If the server returns an unexpected shape, deviceData.device_auth_id could be undefined, leading to the poll request sending "undefined" in the body and eventually timing out with a misleading error.
🛡️ Proposed fix: validate required fields

```diff
 const deviceData = (await deviceResponse.json()) as {
     device_auth_id: string;
     user_code: string;
     interval?: string;
 };
+if (!deviceData.device_auth_id || !deviceData.user_code) {
+    throw new Error('OpenAI device auth returned incomplete response (missing device_auth_id or user_code)');
+}
+
 const intervalSec = Math.max(Number.parseInt(deviceData.interval ?? '5', 10) || 5, 1);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/cli/commands/connect/openai-codex.ts` around lines 87 - 94,
The code casts deviceResponse.json() to deviceData but doesn't validate required
fields: explicitly check that deviceData.device_auth_id and deviceData.user_code
are present and non-empty before using them (and treat interval via
Number.parseInt safely); if either is missing, log/throw a clear error and stop
the flow instead of proceeding to compute intervalSec/pollIntervalMs and sending
a poll request with "undefined". Update the deviceData handling around the
deviceResponse.json() call and the subsequent intervalSec/pollIntervalMs logic
to validate types/values and fail fast with a descriptive message when required
fields are absent.
```ts
function isLLMProvider(value: unknown): value is LLMProvider {
    return typeof value === 'string' && (LLM_PROVIDERS as readonly string[]).includes(value);
}
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
# Find all isLLMProvider definitions
rg -n "function isLLMProvider|isLLMProvider\s*=|export.*isLLMProvider" --type=ts

Repository: truffle-ai/dexto
Length of output: 278
🏁 Script executed:
# Get context around isLLMProvider in init-app.ts (lines 20-30)
sed -n '20,30p' packages/cli/src/cli/commands/init-app.ts | cat -n

Repository: truffle-ai/dexto
Length of output: 403
🏁 Script executed:
# Get context around isLLMProvider in provider-setup.ts (lines 310-320)
sed -n '310,320p' packages/cli/src/cli/utils/provider-setup.ts | cat -n

Repository: truffle-ai/dexto
Length of output: 332
🏁 Script executed:
# Check if init-app.ts imports from provider-setup.ts
rg -n "from.*provider-setup" packages/cli/src/cli/commands/init-app.ts

Repository: truffle-ai/dexto
Length of output: 99
🏁 Script executed:
# Check all imports in init-app.ts
head -30 packages/cli/src/cli/commands/init-app.ts | cat -n

Repository: truffle-ai/dexto
Length of output: 1383
Export isLLMProvider from provider-setup.ts to eliminate duplication.
The same type guard is defined in both packages/cli/src/cli/utils/provider-setup.ts (line 314) and here. Since init-app.ts already imports from provider-setup.ts, consolidate the more defensively typed version (unknown parameter with typeof check) into provider-setup.ts, export it, and import it here instead.
Proposed fix
In packages/cli/src/cli/utils/provider-setup.ts (line 314):

```diff
-function isLLMProvider(value: string): value is LLMProvider {
-    return (LLM_PROVIDERS as readonly string[]).includes(value);
+export function isLLMProvider(value: unknown): value is LLMProvider {
+    return typeof value === 'string' && (LLM_PROVIDERS as readonly string[]).includes(value);
 }
```

In packages/cli/src/cli/commands/init-app.ts (lines 23–25), remove the local definition and update the import:

```diff
-function isLLMProvider(value: unknown): value is LLMProvider {
-    return typeof value === 'string' && (LLM_PROVIDERS as readonly string[]).includes(value);
-}
 import {
     getProviderDisplayName,
+    isLLMProvider,
     isValidApiKeyFormat,
     PROVIDER_OPTIONS,
 } from '../utils/provider-setup.js';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/cli/commands/init-app.ts` around lines 23 - 25, The
duplicate type-guard isLLMProvider (takes unknown and checks typeof + includes
against LLM_PROVIDERS) in init-app.ts should be removed and the canonical,
defensively-typed definition exported from provider-setup.ts; update
provider-setup.ts to export its isLLMProvider, remove the local function in
packages/cli/src/cli/commands/init-app.ts, and import the exported isLLMProvider
from packages/cli/src/cli/utils/provider-setup.ts where referenced so the code
uses the single shared implementation.
```ts
const answers = await p.group<{
    llmProvider: string | symbol;
    llmApiKey: string | symbol;
    directory: string | symbol;
    createExampleFile: boolean | symbol;
}>(
```
🧩 Analysis chain
🏁 Script executed:
# First, check the init-app.ts file at the specified lines
fd "init-app.ts" packages/cli --type=f | head -1 | xargs cat -n | sed -n '30,50p'

Repository: truffle-ai/dexto
Length of output: 888
🏁 Script executed:
# Check the full context around lines 112-121
fd "init-app.ts" packages/cli --type=f | head -1 | xargs cat -n | sed -n '105,130p'

Repository: truffle-ai/dexto
Length of output: 1027
🏁 Script executed:
# Check `@clack/prompts` package.json version and types
fd "package.json" packages/cli --max-depth 2 | head -1 | xargs cat | grep -A2 -B2 "@clack/prompts"

Repository: truffle-ai/dexto
Length of output: 205
🏁 Script executed:
# Check imports in init-app.ts to understand how p.group is used
fd "init-app.ts" packages/cli --type=f | head -1 | xargs head -20

Repository: truffle-ai/dexto
Length of output: 912
🏁 Script executed:
# Check if there's a TypeScript compilation error or type mismatch
# Let's look at the complete p.group call to understand the structure
fd "init-app.ts" packages/cli --type=f | head -1 | xargs cat -n | sed -n '37,110p'

Repository: truffle-ai/dexto
Length of output: 3447
🏁 Script executed:
# Check `@clack/prompts` types - look for node_modules or type definitions
find packages/cli/node_modules/@clack 2>/dev/null | head -20 || echo "node_modules not available, checking package source"

Repository: truffle-ai/dexto
Length of output: 42
🌐 Web query:
@clack/prompts version 0.10.1 TypeScript types Promise return symbol
💡 Result:
In @clack/prompts@0.10.1, the prompt APIs are typed to return a cancellation sentinel as a symbol, so TypeScript shows unions like:
```ts
text(opts): Promise<string | symbol>
confirm(opts): Promise<boolean | symbol>
select(opts): Promise<Value | symbol>
multiselect(opts): Promise<Value[] | symbol>
```

(app.unpkg.com)
That symbol is the value you get when the user cancels (e.g. Ctrl+C / Esc). You’re expected to narrow it with isCancel (re-exported from @clack/core) before using the value: (app.unpkg.com)
```ts
import { text, isCancel, cancel } from "@clack/prompts";

const name = await text({ message: "Name?" });
if (isCancel(name)) {
    cancel("Operation cancelled.");
    process.exit(0);
}
// here: name is string
```

Note: the `group()` helper’s types explicitly exclude the cancel symbol from the final results object (it uses `Exclude<..., symbol>` internally), but you still handle cancellation via `onCancel`. (app.unpkg.com)
Citations:
- 1: https://app.unpkg.com/%40clack/prompts%400.10.1/files/dist/index.d.ts
- 2: https://app.unpkg.com/%40clack/prompts%400.10.1/files/dist/index.d.ts
- 3: https://app.unpkg.com/%40clack/prompts%400.10.1/files/dist/index.d.ts
🏁 Script executed:
# Check if there's a tsconfig in the CLI package and try to understand compilation
fd "tsconfig" packages/cli --max-depth 2 | head -5

Repository: truffle-ai/dexto
Length of output: 123
🏁 Script executed:
# Look at the isLLMProvider check - understand what it does
fd "init-app.ts" packages/cli --type=f | head -1 | xargs rg "isLLMProvider" -A3 -B3

Repository: truffle-ai/dexto
Length of output: 713
🏁 Script executed:
# Verify if the explicit generic parameter is the root cause
# Let's check what happens if we remove it by looking at p.group signature
# and understand if explicit parameter overrides the Exclude behavior
# Also check if there are other similar uses of p.group in the codebase
rg "p\.group\s*<" packages/cli --type=ts

Repository: truffle-ai/dexto
Length of output: 137
🏁 Script executed:
# Check if there are type errors currently in the build
fd "tsconfig.typecheck.json" packages/cli | head -1 | xargs cat

Repository: truffle-ai/dexto
Length of output: 133
🏁 Script executed:
# Let's understand the actual behavior by creating a minimal test
# Check what the inferred type would be if we didn't provide the explicit generic
cat > /tmp/test-group.ts << 'EOF'
// Simulating `@clack/prompts` behavior
type CancelSymbol = symbol;
// This is how p.group is actually typed - it excludes symbol from the return
async function group<T extends Record<string, () => Promise<any>>>(
prompts: T,
options?: { onCancel?: () => void }
): Promise<{
[K in keyof T]: Exclude<Awaited<ReturnType<T[K]>>, CancelSymbol>
}> {
// implementation
return {} as any;
}
// Case 1: With explicit generic including symbol (current code)
const test1 = group<{
field1: () => Promise<string | symbol>;
field2: () => Promise<boolean | symbol>;
}>({
field1: async () => "test",
field2: async () => true,
});
// Case 2: Without explicit generic (inference)
const test2 = group({
field1: async () => "test" as string | symbol,
field2: async () => true as boolean | symbol,
});
EOF
cat /tmp/test-group.ts
Length of output: 856
🏁 Script executed:
# Check if there's a type error by examining how the return is actually used
# The key: does answers.llmApiKey satisfy the string return type?
fd "init-app.ts" packages/cli --type=f | head -1 | xargs cat -n | sed -n '116,122p'
Length of output: 302
Remove the explicit generic parameter from p.group() to let it properly apply automatic type narrowing.
The p.group() helper automatically excludes the cancellation symbol from its return type via Exclude<..., symbol>. By explicitly providing the generic parameter with symbol unions (string | symbol, boolean | symbol), you prevent TypeScript from applying this automatic narrowing, causing the type mismatch between answers fields (which appear as string | symbol) and the return type signature expecting string and boolean.
Remove p.group<{ llmProvider: string | symbol; ... }> and let the function infer the correct types from the prompt functions' return types. The onCancel callback already handles cancellation at runtime.
Alternatively, if an explicit generic is necessary, exclude symbol from the unions: p.group<{ llmProvider: string; llmApiKey: string; ... }>.
The proposed type assertions work but mask the underlying type system issue.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/cli/commands/init-app.ts` around lines 37 - 42, The explicit
generic on p.group is forcing fields like
llmProvider/llmApiKey/createExampleFile to include symbol unions and prevents
the helper from auto-excluding the cancellation symbol; remove the explicit
generic parameter on the p.group(...) call so TypeScript can infer the correct
types (or if you must keep a generic, use types without symbol like llmProvider:
string, createExampleFile: boolean) and rely on the existing onCancel handler
for cancellation; update any code that assumed the widened symbol unions (e.g.,
the answers variable) to use the inferred types returned by p.group.
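The narrowing described above can be demonstrated with a self-contained stand-in for group() — this mimics, but is not, the real @clack/prompts implementation; names and the cancel-symbol plumbing are assumptions for illustration:

```typescript
// Sketch of group()-style inference: the mapped return type applies
// Exclude<..., symbol> to whatever each prompt function resolves to,
// so callers who let TypeScript infer T get fully narrowed results.
const CANCEL = Symbol('clack:cancel');
type Cancel = typeof CANCEL;

async function group<T extends Record<string, () => Promise<unknown>>>(
    prompts: T,
    options?: { onCancel?: () => void }
): Promise<{ [K in keyof T]: Exclude<Awaited<ReturnType<T[K]>>, symbol> }> {
    const results = {} as { [K in keyof T]: Exclude<Awaited<ReturnType<T[K]>>, symbol> };
    for (const [key, prompt] of Object.entries(prompts)) {
        const value = await prompt();
        if (typeof value === 'symbol') {
            // Runtime cancellation is handled here, which is why the
            // result type can safely drop the symbol member.
            options?.onCancel?.();
            throw new Error('cancelled');
        }
        (results as Record<string, unknown>)[key] = value;
    }
    return results;
}

// No explicit generic: llmProvider infers as string and createExampleFile as
// boolean, even though each prompt can resolve to the cancel symbol.
const answers = await group({
    llmProvider: async (): Promise<string | Cancel> => 'openai',
    createExampleFile: async (): Promise<boolean | Cancel> => true,
});
console.log(answers.llmProvider.toUpperCase()); // OPENAI
```

Supplying the generic explicitly, as in the flagged code, replaces the inferred `T` and bypasses this narrowing — which is the type mismatch the comment describes.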
zhipuai: ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'],
'zhipuai-coding-plan': ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'],
zai: ['ZHIPU_API_KEY'],
'zai-coding-plan': ['ZHIPU_API_KEY'],
🧩 Analysis chain
🏁 Script executed:
cat -n packages/core/src/utils/api-key-resolver.ts
Length of output: 4001
🏁 Script executed:
# Search for zai and zhipuai provider definitions and documentation
rg -n "zai" --type=ts -g '!node_modules/**' -B2 -A2 | head -80
Length of output: 5938
🏁 Script executed:
# Search for more context on zai provider
rg -n "Z\.AI" --type=ts --type=tsx -B2 -A2 | head -50
Length of output: 87
🏁 Script executed:
# Search for more context on zai provider
rg -n "Z\.AI|zai" packages/agent-management/src/auth/connect-catalog.ts -B3 -A3
Length of output: 585
🏁 Script executed:
# Search for any documentation or README about providers
fd -e md -e txt | xargs grep -l "zai\|Z\.AI" 2>/dev/null | head -10
Length of output: 42
🏁 Script executed:
# Search for any comments explaining the zai provider difference
rg -n "zai|zhipuai" packages/core/src/llm/types.ts -B2 -A2 | head -80
Length of output: 239
🏁 Script executed:
# Get the full LLMProvider type definition
cat -n packages/core/src/llm/types.ts | head -50
Length of output: 1746
🏁 Script executed:
# Search for any PR or comments discussing zai specifically
git log --all --oneline --grep="zai\|Z\.AI" | head -20
Length of output: 42
🏁 Script executed:
# Search git history for zai addition
git log --all -p --follow -S "zai" -- packages/core/src/utils/api-key-resolver.ts | head -100
Length of output: 3876
🏁 Script executed:
# Check agent-management version to see if the same pattern exists there
git log --all -p --follow -S "zai" -- packages/agent-management/src/utils/api-key-resolver.ts | head -100
Length of output: 3934
🏁 Script executed:
# Search for any references to Z.AI API or whether it differs from zhipuai in terms of authentication
rg -n "ZHIPUAI_API_KEY" --type=ts -B2 -A2 | grep -E "(zai|zhipuai)" -B2 -A2 | head -60
Length of output: 1373
🏁 Script executed:
# Check if there are any tests that clarify the intended behavior
find . -name "*.test.ts" -o -name "*.spec.ts" | xargs grep -l "zai\|api-key-resolver" 2>/dev/null | head -5
Length of output: 119
🏁 Script executed:
# Check the test file for any relevant context about zai
rg -n "zai" packages/core/src/llm/services/vercel.integration.test.ts -B3 -A3
Length of output: 42
🏁 Script executed:
# Search for any recent commits that added zai provider
git log --all --oneline --since="2026-01-01" | grep -i "zai\|zhipu" | head -20
Length of output: 42
Add a clarifying comment explaining why zai only uses ZHIPU_API_KEY without the ZHIPUAI_API_KEY fallback.
zai and zhipuai use the same underlying model (glm-4.7) and model matching logic, yet their API key mappings differ: zhipuai tries both ZHIPU_API_KEY and ZHIPUAI_API_KEY, while zai only tries ZHIPU_API_KEY. If this difference is intentional (e.g., due to API capabilities), add a brief comment explaining it for future maintainers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/utils/api-key-resolver.ts` around lines 22 - 25, Add a
brief clarifying comment next to the API key mapping entries for the providers
'zai' and 'zhipuai' explaining why 'zai' only lists 'ZHIPU_API_KEY' while
'zhipuai' also includes 'ZHIPUAI_API_KEY'; update the mapping block containing
the symbols 'zai', 'zhipuai', 'zhipuai-coding-plan', 'zai-coding-plan' to note
the intentional difference (e.g., that 'zai' only accepts the legacy
ZHIPU_API_KEY or that ZHIPUAI_API_KEY is not applicable for that provider) so
future maintainers understand the rationale.
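The first-match fallback behaviour under discussion can be illustrated with a minimal sketch; the map entries mirror the diff above, but the function body is an assumption for illustration, not the real resolveApiKeyForProvider:

```typescript
// Hypothetical sketch of env-var fallback resolution. Each provider maps to an
// ordered list of env var names; the first non-empty one wins.
const PROVIDER_API_KEY_MAP: Record<string, readonly string[]> = {
    zhipuai: ['ZHIPU_API_KEY', 'ZHIPUAI_API_KEY'], // mainland key plus legacy alias
    zai: ['ZHIPU_API_KEY'], // international Z.AI: no ZHIPUAI_API_KEY fallback (see note above)
};

function resolveApiKeyForProvider(
    provider: string,
    env: Record<string, string | undefined> = process.env
): string | undefined {
    for (const name of PROVIDER_API_KEY_MAP[provider] ?? []) {
        const value = env[name];
        if (value && value.trim() !== '') return value;
    }
    return undefined;
}
```

With only ZHIPUAI_API_KEY set, `zhipuai` resolves a key but `zai` does not — exactly the asymmetry the requested comment should document.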
const setDefaultRoute = createRoute({
    method: 'post',
    path: '/llm/connect/defaults',
    summary: 'Set Default Profile',
    description: 'Sets the default profile for a provider (or clears it).',
    tags: ['llm'],
    request: { body: { content: { 'application/json': { schema: SetDefaultBodySchema } } } },
    responses: {
        200: {
            description: 'Default updated',
            content: {
                'application/json': {
                    schema: z
                        .object({ ok: z.literal(true).describe('Request succeeded') })
                        .strict()
                        .describe('Set default profile response'),
                },
            },
        },
    },
});
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the full route implementation including the handler
sed -n '100,180p' packages/server/src/hono/routes/llm-connect.ts
Length of output: 2717
🏁 Script executed:
# Check the standard error response format and helpers
head -100 packages/server/src/hono/middleware/error.ts
Length of output: 3300
🏁 Script executed:
# Look for setDefaultLlmAuthProfile function to understand what errors it throws
rg -n "setDefaultLlmAuthProfile" --type ts -B 2 -A 10
Length of output: 21461
🏁 Script executed:
# Check existing error response patterns in other routes
rg -n "responses:.*\(400|404|409\)" packages/server/src/hono/routes -A 3 --type ts | head -50
Length of output: 3451
🏁 Script executed:
# Get the full implementation of setDefaultLlmAuthProfile
sed -n '324,360p' packages/agent-management/src/auth/llm-profiles.ts
Length of output: 1398
🏁 Script executed:
# Check if it imports/uses DextoRuntimeError or DextoValidationError
rg -n "DextoRuntimeError|DextoValidationError" packages/agent-management/src/auth/llm-profiles.ts
Length of output: 42
Map set-default failures to standard error responses (avoid opaque 500s).
setDefaultLlmAuthProfile throws plain Error instances for missing profiles and provider mismatches, but this route currently has no error handling, causing these to bubble into a generic 500. The OpenAPI spec only documents the 200 response.
Catch these errors in the handler, throw DextoRuntimeError with appropriate types (ErrorType.NOT_FOUND for missing profile, ErrorType.CONFLICT for mismatch), and document the 404 and 409 responses in the route schema with the standard error structure:
{
code: string,
message: string,
scope: string,
type: ErrorType,
severity: 'error',
endpoint: string,
method: string
}
See mapErrorTypeToStatus() in packages/server/src/hono/middleware/error.ts for the error-to-status mapping.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/hono/routes/llm-connect.ts` around lines 112 - 132, The
route setDefaultRoute currently lacks error handling so errors thrown by
setDefaultLlmAuthProfile bubble up as 500s; update the route handler to catch
errors from setDefaultLlmAuthProfile, inspect the error message/type and rethrow
a DextoRuntimeError with ErrorType.NOT_FOUND for missing profiles and
ErrorType.CONFLICT for provider mismatches (so mapErrorTypeToStatus will return
404/409), and ensure the thrown DextoRuntimeError includes the standard error
fields (code, message, scope, type, severity, endpoint, method). Also update the
setDefaultRoute OpenAPI responses to include 404 and 409 entries using the
shared standard error schema (code:string, message:string, scope:string,
type:ErrorType, severity:'error', endpoint:string, method:string) so clients
receive structured errors instead of opaque 500s.
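The suggested error-to-status mapping can be sketched as follows; the DextoRuntimeError shape, ErrorType values, and mapErrorTypeToStatus body here are assumptions modeled on the description above, not the real implementations in packages/server:

```typescript
// Hedged sketch: convert domain errors into the documented structured
// responses instead of letting them bubble into opaque 500s.
type ErrorType = 'NOT_FOUND' | 'CONFLICT' | 'BAD_REQUEST' | 'INTERNAL';

class DextoRuntimeError extends Error {
    constructor(
        public code: string,
        public type: ErrorType,
        message: string
    ) {
        super(message);
    }
}

function mapErrorTypeToStatus(type: ErrorType): number {
    switch (type) {
        case 'NOT_FOUND': return 404;
        case 'CONFLICT': return 409;
        case 'BAD_REQUEST': return 400;
        default: return 500;
    }
}

function toErrorResponse(err: unknown, endpoint: string, method: string) {
    // Wrap anything that isn't already a DextoRuntimeError as INTERNAL (500).
    const e = err instanceof DextoRuntimeError
        ? err
        : new DextoRuntimeError('internal_error', 'INTERNAL', String(err));
    return {
        status: mapErrorTypeToStatus(e.type),
        body: {
            code: e.code,
            message: e.message,
            scope: 'llm',
            type: e.type,
            severity: 'error' as const,
            endpoint,
            method,
        },
    };
}
```

In the handler, a missing profile would then map to NOT_FOUND (404) and a provider mismatch to CONFLICT (409), matching the responses the OpenAPI schema should document.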
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx (1)
241-263: ⚠️ Potential issue | 🟠 Major
Guard `providerModels` against `undefined` before iterating — newly unblocked providers may not have a populated entry in `allModels`.
Before this change, `amazon-bedrock` and `google-vertex` were never allowed to reach the `for (const model of providerModels)` loop. Now that they pass the skip filters, if `getSupportedModels()` returns a partial record that lacks an entry for either provider (e.g., during a registry transition or when the provider rename hasn't fully propagated), `providerModels` will be `undefined` and the `for…of` loop will throw a `TypeError` at runtime, breaking the entire model-loading flow.
🛡️ Proposed defensive guard
     const providerModels = allModels[provider];
+    if (!providerModels) continue;
     for (const model of providerModels) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx` around lines 241 - 263, The loop assumes providerModels = allModels[provider] is defined; add a defensive guard before the iteration (e.g., check providerModels !== undefined && providerModels.length > 0) and skip the provider when it's missing so the for (const model of providerModels) loop in ModelSelectorRefactored.tsx won't throw; specifically protect the providerModels lookup for providers like "amazon-bedrock" and "google-vertex" (and any newly unblocked provider) after the existing skip/continue checks, referencing the allModels map and the getSupportedModels() data source.
packages/core/src/llm/executor/provider-options.ts (1)
64-70: ⚠️ Potential issue | 🔴 Critical
Change providerOptions key from `'vertex-anthropic'` to `'anthropic'`.
Per the official `@ai-sdk/google-vertex` documentation, the Anthropic provider options use the key `'anthropic'`, not `'vertex-anthropic'`. The correct structure is:
providerOptions: {
    anthropic: {
        cacheControl: { type: 'ephemeral' },
        sendReasoning: true,
    },
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/llm/executor/provider-options.ts` around lines 64 - 70, The returned providerOptions object for the branch where provider === 'google-vertex-anthropic' uses the wrong key; change the top-level key from 'vertex-anthropic' to 'anthropic' in the object literal returned by that branch so the providerOptions shape matches `@ai-sdk/google-vertex` expectations (i.e., the object returned by the provider === 'google-vertex-anthropic' branch should have an anthropic property containing cacheControl and sendReasoning).
packages/core/src/llm/services/factory.ts (1)
384-395: ⚠️ Potential issue | 🟠 Major
Consolidate optional parameters into a single options object.
The function currently has two optional positional parameters (`compactionStrategy` and `authResolver`). Per the coding guideline to avoid optional parameters and prefer a single options object with defaults, refactor as suggested.
Note: The JSDoc at line 381 references `@param compactionConfig` but the actual parameter is `compactionStrategy` — update the documentation to match.
♻️ Suggested refactor
 export function createLLMService(
     config: ValidatedLLMConfig,
     toolManager: ToolManager,
     systemPromptManager: SystemPromptManager,
     historyProvider: ConversationHistoryProvider,
     sessionEventBus: SessionEventBus,
     sessionId: string,
     resourceManager: import('../../resources/index.js').ResourceManager,
     logger: Logger,
-    compactionStrategy?: import('../../context/compaction/types.js').CompactionStrategy | null,
-    authResolver?: LlmAuthResolver | null
+    options: {
+        compactionStrategy?: import('../../context/compaction/types.js').CompactionStrategy | null;
+        authResolver?: LlmAuthResolver | null;
+    } = {}
 ): VercelLLMService {
+    const { compactionStrategy = null, authResolver = null } = options;
-    const model = createVercelModel(config, { sessionId, authResolver: authResolver ?? null });
+    const model = createVercelModel(config, { sessionId, authResolver });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/llm/services/factory.ts` around lines 384 - 395, Refactor createLLMService to replace the two optional positional params compactionStrategy and authResolver with a single optional options object (e.g., options?: { compactionStrategy?: CompactionStrategy | null; authResolver?: LlmAuthResolver | null }) and update internal usage to read from options.compactionStrategy and options.authResolver with sensible defaults; update the JSDoc to rename `@param` compactionConfig to `@param` options and document the two fields, adjust the function signature and any callers accordingly to pass an options object instead of positional args, and ensure the exported return type and other required imports (CompactionStrategy, LlmAuthResolver) remain intact.
scripts/sync-llm-models.ts (1)
311-314: ⚠️ Potential issue | 🟡 Minor
Misleading error message references `dexto-nova` for a generic function.
The error at line 313 says `"needed for dexto-nova provider '${provider}'"`, but this function is called for every provider in the loop at lines 385–404, not just dexto-nova. This is likely a copy-paste remnant.
📝 Proposed fix
 throw new Error(
-    `models.dev provider '${modelsDevProviderId}' not found (needed for dexto-nova provider '${provider}')`
+    `models.dev provider '${modelsDevProviderId}' not found (needed for provider '${provider}')`
 );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/sync-llm-models.ts` around lines 311 - 314, The error message thrown when modelsDevProvider is missing is misleadingly hard-coded to "dexto-nova"; update the throw in the block that checks modelsDevProvider to use a generic message referencing modelsDevProviderId and the current provider variable (e.g., "models.dev provider '${modelsDevProviderId}' not found (needed for provider '${provider}')") so it correctly reflects any provider passed into this function/loop where modelsDevProvider, modelsDevProviderId and provider are used.
♻️ Duplicate comments (4)
packages/cli/src/cli/commands/connect/index.ts (3)
42-45: Duplicate: make `options` a required, defaulted parameter.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/commands/connect/index.ts` around lines 42 - 45, The function handleConnectCommand currently declares options as optional; make it a required parameter with a default value to avoid repeated undefined checks: change the signature to accept options: { interactive?: boolean } = {} (i.e., remove the optional ? from the parameter itself and provide an empty-object default), then update the body to use options.interactive directly (keep the existing conditional for options.interactive === false) and adjust any callers to stop passing undefined where applicable; reference handleConnectCommand so you can locate and update all usages and the parameter definition accordingly.
69-71: Duplicate: avoid non-null assertion on `provider.methods[0]`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/commands/connect/index.ts` around lines 69 - 71, The code uses a non-null assertion on provider.methods[0] when computing methodId; remove the '!' and handle the potential undefined safely by either using optional chaining (provider.methods[0]?.id) with a clear fallback or by narrowing first (e.g., const first = provider.methods[0]; if (!first) throw/new error or handle) before reading .id; update the expression that assigns methodId (reference: methodId and provider.methods) so TypeScript is satisfied without the non-null assertion and you explicitly handle the zero-length case.
377-381: Duplicate: guard `open()` in guided setup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/commands/connect/index.ts` around lines 377 - 381, There's a duplicated guard around calling open() in the guided setup: locate the conditional that checks p.isCancel(shouldOpenDocs) && shouldOpenDocs and the nested check for provider.modelsDevProviderId that calls open(`https://models.dev/${provider.modelsDevProviderId}`), then remove or consolidate the duplicate conditional so open() is only invoked once; keep the necessary truthiness checks (preserve p.isCancel/shouldOpenDocs and provider.modelsDevProviderId checks) and ensure only a single code path calls open().packages/agent-management/src/utils/api-key-resolver.ts (1)
1-5: Clean consolidation to re-exports from `@dexto/core`.
Eliminating the local `PROVIDER_API_KEY_MAP` duplication in favor of re-exporting from core is the right move — single source of truth for provider-to-env-var mappings.
Note: Previous review comments about missing `ZHIPUAI_API_KEY` fallback for `zai`/`zai-coding-plan` and `MOONSHOT_API_KEY` fallback for `kimi-for-coding` now apply to the core implementation at `packages/core/src/utils/api-key-resolver.ts` (not provided in this review).
#!/bin/bash
# Verify the env var mappings in core for zai and kimi-for-coding providers
echo "=== zai env var mappings in core ==="
rg -n "zai" packages/core/src/utils/api-key-resolver.ts -B 1 -A 2
echo ""
echo "=== kimi env var mappings in core ==="
rg -n "kimi" packages/core/src/utils/api-key-resolver.ts -B 1 -A 2
echo ""
echo "=== Full provider key map in core ==="
rg -n "PROVIDER_API_KEY_MAP|getApiKeyEnvVars" packages/core/src/utils/api-key-resolver.ts -A 3 | head -60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/agent-management/src/utils/api-key-resolver.ts` around lines 1 - 5, The core provider-to-env-var mapping is missing fallback env vars for certain providers; update the PROVIDER_API_KEY_MAP in packages/core's api-key-resolver so that 'zai' and 'zai-coding-plan' include 'ZHIPUAI_API_KEY' as a fallback and 'kimi-for-coding' includes 'MOONSHOT_API_KEY' as a fallback; ensure these keys are returned by getApiKeyEnvVarsForProvider and thus picked up by getPrimaryApiKeyEnvVar/resolveApiKeyForProvider so the re-exports in api-key-resolver.ts (getApiKeyEnvVarsForProvider, getPrimaryApiKeyEnvVar, resolveApiKeyForProvider) will surface the correct env var fallbacks.
🧹 Nitpick comments (7)
packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx (1)
810-810: `hasCustomModels` is computed but never consumed — remove the dead variable
`hasCustomModels` is derived at line 810 but has no reference anywhere in the JSX or downstream logic (lines 816–976).
♻️ Remove dead variable
-    const hasCustomModels = customModels.length > 0;
     const hasActionableItems = filteredItems.some((item) => !isAddCustomOption(item));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx` at line 810, The variable hasCustomModels is computed from customModels but never used; remove the dead variable declaration (hasCustomModels) from ModelSelectorRefactored.tsx so there is no unused constant. Locate the const hasCustomModels = customModels.length > 0; line and delete it, leaving any logic that uses customModels directly intact.
packages/core/src/llm/registry/sync.ts (3)
288-295: Extract `includeModel` to module scope; tighten the OpenAI `o`-prefix filter.
`includeModel` captures no closed-over state but is recreated on every `buildModelsByProviderFromParsedSources` call. Moving it to module scope signals intent and avoids redundant allocation.
`modelId.startsWith('o')` for OpenAI is overly broad — it would match `omni-moderation-*` or any future OpenAI model whose ID starts with `o` but isn't in the reasoning series. A tighter prefix like `startsWith('o1') || startsWith('o3') || startsWith('o4')` is more explicit, though it would require updating for each new series. If forward-compatibility is preferred, this deserves a comment.
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/llm/registry/sync.ts` around lines 288 - 295, The includeModel function should be moved out of buildModelsByProviderFromParsedSources to module scope since it captures no closed-over state; relocate the function declaration (includeModel) to top-level in the module and update all references. Also tighten the OpenAI filter: replace the broad modelId.startsWith('o') check inside includeModel with explicit allowed prefixes for OpenAI reasoning series (e.g., modelId.startsWith('o1') || modelId.startsWith('o3') || modelId.startsWith('o4')) or otherwise document the chosen forward-compatibility strategy in a comment so the intent is clear.
297-301: Unsafe `as LLMProvider` cast — unrecognised models.dev keys create invalid registry entries silently.
`Object.keys(modelsDevApi)` is `string[]`. Casting each element to `LLMProvider` via `as` bypasses the type system: if models.dev introduces a provider key that isn't in `LLM_PROVIDERS`, the resulting map will have a key that violates the declared return type `Record<LLMProvider, ModelInfo[]>` with no compile-time or runtime diagnostic.
Consider filtering against the known set of providers or using a type guard before including a key:
♻️ Proposed fix
// Example type-guard approach (requires access to LLM_PROVIDERS array)
import { LLM_PROVIDERS } from '../types.js';

const isKnownProvider = (id: string): id is LLMProvider =>
    (LLM_PROVIDERS as readonly string[]).includes(id);

// …inside the loop:
for (const providerId of providerIds) {
    if (!isKnownProvider(providerId)) continue; // or log + skip
    const providerKey = providerId; // now typed as LLMProvider
    …
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/llm/registry/sync.ts` around lines 297 - 301, The loop currently casts providerId to LLMProvider unsafely (providerKey = providerId as LLMProvider) which can produce invalid entries in modelsByProvider if modelsDevApi contains unknown keys; replace the cast with a runtime type guard that checks providerId against the known LLM_PROVIDERS set (e.g., implement isKnownProvider(id: string): id is LLMProvider using LLM_PROVIDERS) and skip or log any unknown keys, then use the narrowed providerId (typed as LLMProvider) when populating modelsByProvider and other code that expects LLMProvider.
276-286: Use `Partial<Record<LLMProvider, string>>` for compile-time provider-key validation.
`Partial<Record<string, string>>` accepts any string key, meaning a typo in a provider name (e.g., `'anthropic2'`) is not caught at compile time and silently has no effect at runtime.
♻️ Proposed fix
-    const defaults: Partial<Record<string, string>> = {
+    const defaults: Partial<Record<LLMProvider, string>> = {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/llm/registry/sync.ts` around lines 276 - 286, The defaults map currently uses Partial<Record<string, string>> which allows arbitrary string keys; change its type to Partial<Record<LLMProvider, string>> so provider keys are validated at compile time; update the declaration of defaults in sync.ts (the const defaults variable) to use LLMProvider and add/import the LLMProvider type from its module if not already imported, then ensure existing keys (openai, anthropic, google, groq, xai, cohere, minimax, zhipuai, moonshotai) match the LLMProvider union to avoid type errors.packages/webui/components/ModelPicker/ModelCard.tsx (1)
106-113: Consider extracting the cloud-credential providers into a constant.
The `noApiKeyNeeded` check now has 7 `||` conditions. A shared set (e.g., `CLOUD_CREDENTIAL_PROVIDERS`) that aligns with `EXCLUDED_PROVIDERS` in `ApiKeysSection.tsx` would reduce duplication and the risk of the two lists drifting out of sync.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/webui/components/ModelPicker/ModelCard.tsx` around lines 106 - 113, Extract the list of providers that don't require cloud credentials into a shared constant (e.g., CLOUD_CREDENTIAL_PROVIDERS) and replace the long OR chain in ModelCard.tsx (used to compute noApiKeyNeeded) with a membership check like CLOUD_CREDENTIAL_PROVIDERS.includes(provider), then export/import that constant so ApiKeysSection.tsx can reuse it instead of maintaining a separate EXCLUDED_PROVIDERS list—ensure both files reference the single shared constant to keep the two lists in sync.packages/core/src/llm/registry/models.manual.ts (1)
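A minimal sketch of the suggested shared constant — the name CLOUD_CREDENTIAL_PROVIDERS and the member list shown are illustrative assumptions, not the actual seven providers from the || chain:

```typescript
// Single shared source of truth for providers that authenticate via cloud
// credentials rather than an API key; both components would import this.
export const CLOUD_CREDENTIAL_PROVIDERS: ReadonlySet<string> = new Set([
    'amazon-bedrock',
    'google-vertex',
    // …the remaining providers currently listed in the || chain
]);

export function noApiKeyNeeded(provider: string): boolean {
    return CLOUD_CREDENTIAL_PROVIDERS.has(provider);
}
```

A Set membership check also stays O(1) as the list grows, and adding a provider becomes a one-line change in one file.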
30-30: Prefer `satisfies` over `as` for type safety on the outer object.
The change from `satisfies` to `as` removes TypeScript's structural checks on the outer record. While the inner `satisfies ManualModelInfo` still validates individual entries, the outer `as` means a typo in a provider key (e.g., `'openai-typo'`) would not be caught by TypeScript.
Since `'openai'` is already in `LLM_PROVIDERS` and the object structure is simple, `satisfies Partial<Record<LLMProvider, ModelInfo[]>>` should work without issues and is safer.
Suggested fix
-} as Partial<Record<LLMProvider, ModelInfo[]>>;
+} satisfies Partial<Record<LLMProvider, ModelInfo[]>>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/llm/registry/models.manual.ts` at line 30, Replace the final type assertion "as Partial<Record<LLMProvider, ModelInfo[]>>" on the manual models object with "satisfies Partial<Record<LLMProvider, ModelInfo[]>>" so TypeScript performs structural checks on the outer record; locate the object literal that currently ends with "as Partial<Record<LLMProvider, ModelInfo[]>>" (the manual models mapping) and change that assertion to use satisfies while keeping the existing inner ManualModelInfo validation intact.scripts/sync-llm-models.ts (1)
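The difference can be shown in a self-contained snippet; the provider union and model names here are reduced placeholders, not the real registry types:

```typescript
// `satisfies` checks the literal against the type; `as` merely asserts it.
type Provider = 'openai' | 'anthropic';

// Checked: a mistyped key like 'openai-typo' would be a compile error here.
const checked = { openai: ['example-model'] } satisfies Partial<Record<Provider, string[]>>;

// Unchecked: the assertion silently accepts the typo'd key — no diagnostic.
const unchecked = { 'openai-typo': ['example-model'] } as Partial<Record<Provider, string[]>>;

console.log('openai' in checked, 'openai' in unchecked); // true false
```

Unlike `as`, `satisfies` also preserves the literal's narrow inferred type, so downstream code keeps precise key information.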
373-380: `includeModel` filter for OpenAI uses broad `startsWith('o')` prefix.
`modelId.startsWith('o')` will include any OpenAI model starting with `o`, which today means o1/o3/o4-series reasoning models. If models.dev ever adds OpenAI models with an `o` prefix that aren't reasoning models (unlikely but possible), they'd slip in. Not a real concern today, just noting the heuristic.
Verify each finding against the current code and only fix it if needed. In `@scripts/sync-llm-models.ts` around lines 373 - 380, The includeModel function uses a too-broad check for OpenAI models (providerId === 'openai') by using modelId.startsWith('o'); narrow this to only the known reasoning model prefixes (e.g., check for 'o1-', 'o3-', 'o4-' or use a regex like /^o[134]-/) or explicit allowed modelId patterns so non-reasoning models that start with 'o' are excluded; update the condition in includeModel to test these specific prefixes/patterns (reference: includeModel, providerId 'openai', modelId.startsWith) and adjust any related tests or comments accordingly.
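One way to tighten the heuristic, if it ever becomes a real concern — the prefix list is an assumption based on today's o1/o3/o4 series and would need updating for new series:

```typescript
// Match only the reasoning-series IDs: an 'o' followed by 1, 3, or 4, then
// either a hyphen (e.g. 'o1-mini') or end of string (e.g. 'o3').
const isOpenAIReasoningModel = (modelId: string): boolean =>
    /^o[134](-|$)/.test(modelId);

console.log(isOpenAIReasoningModel('o1-mini')); // true
console.log(isOpenAIReasoningModel('omni-moderation-latest')); // false
```

This keeps the filter a single expression while excluding `omni-*` and other non-reasoning `o`-prefixed IDs.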
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/cli/src/cli/commands/connect/index.test.ts`:
- Around line 42-53: The test contains API-key-like literal strings (e.g.,
credential.key = 'sk-test-123' in makeProfile) that trigger secret scanners;
replace those with neutral, non-key placeholders (for example
'placeholder-api-key' or 'TEST_API_KEY') in makeProfile and in the other literal
occurrences referenced (the blocks at 66-84 and 115-137) so tests use clearly
non-secret values while preserving the same shape/type of the credential object
and other test data.
In `@packages/cli/src/cli/commands/connect/index.ts`:
- Around line 165-187: Replace the inline p.password prompt in the method.kind
=== 'api_key' branch with the shared API-key setup helper exported from
packages/cli/src/cli/utils/api-key-setup.ts: call that helper to obtain the
validated API key (and handle cancel via isCancel or the helper's cancellation
contract), then pass the returned key into upsertLlmAuthProfile({ profileId,
providerId: provider.providerId, methodId: method.id, label: method.label,
credential: { type: 'api_key', key: String(key).trim() } }) and follow with
setDefaultLlmAuthProfile({ providerId: provider.providerId, profileId }); ensure
you preserve the existing validation/short-key checks and cancellation flow but
centralize them in the shared helper to keep the UX consistent.
In `@packages/cli/src/cli/utils/api-key-verification.ts`:
- Around line 46-50: The switch currently routes 'zai' and 'zai-coding-plan' to
verifyGLM (which hits the mainland ZhipuAI endpoint), causing international Z.AI
keys to fail; implement a new verifyZAI(apiKey: string) function that mirrors
verifyGLM's signature/behavior but targets the international base URL
(https://api.z.ai/api/paas/v4/) and uses the same request/response handling and
return type as verifyGLM, then update the switch so the 'zai' and
'zai-coding-plan' cases call verifyZAI(...) instead of verifyGLM(...).
In `@packages/core/src/llm/registry/sync.ts`:
- Around line 322-324: The loop that assigns empty arrays to overlay IDs
currently overwrites any existing dynamic entries; change the assignment in the
loop that iterates overlayProviderIds so it only initializes when absent (use
nullish-coalescing assignment semantics) instead of unconditional
overwrite—update the code that assigns to modelsByProvider[overlayId] to
initialize only if modelsByProvider[overlayId] is undefined/null (so existing
entries built earlier in the sync logic are preserved).
- Around line 312-316: The amazon-bedrock normalization currently strips only
"eu.", "us.", and "global." prefixes in the models mapping (inside the
providerId === 'amazon-bedrock' block that assigns
modelsByProvider[providerKey]) which misses "apac." and breaks deduplication;
update the replacement regex used in the map callback for m.name (the
m.name.replace(...) call) to also match and remove the "apac." prefix (e.g.,
include apac\. in the alternation) so region-prefixed IDs normalize consistently
before the filter/sort steps.
In `@packages/core/src/llm/services/factory.ts`:
- Around line 58-60: Extract the multi-source fallback logic into a dedicated
resolver function (e.g., resolveRuntimeConfig or resolveCredentials) that
centralizes precedence and returns a concrete config object; replace the inline
chains around resolveRuntimeAuth / runtimeAuth, the apiKey assignment (currently
using llmConfig.apiKey || runtimeAuth?.apiKey ||
resolveApiKeyForProvider(provider)) and extraHeaders with a single call to that
resolver, and move any baseURL resolution (lines ~84-87) into the same resolver
so the selection order is documented and backward-compatibility fallbacks are
isolated in one place.
In `@packages/core/src/utils/api-key-resolver.ts`:
- Around line 11-56: getApiKeyEnvVarsForProvider currently returns [] for
providers not in the models.dev snapshot (PROVIDERS_BY_ID), so the Dexto gateway
provider 'dexto-nova' never gets its fallback API key; update
getApiKeyEnvVarsForProvider to include 'dexto-nova' mapping (similar to
EXTRA_API_KEY_ENV_VARS entries) so DEXTO_API_KEY is returned — either add
'dexto-nova': ['DEXTO_API_KEY'] to the EXTRA_API_KEY_ENV_VARS constant or handle
provider === 'dexto-nova' explicitly before returning the snapshot-derived list,
ensuring uniqueInOrder([...filtered, ...extras]) will include 'DEXTO_API_KEY'.
In `@scripts/sync-llm-models.ts`:
- Around line 395-404: In the amazon-bedrock branch where modelsByProvider is
set, the regex used to strip region prefixes in the map callback is
double-escaped (`/^(eu\\.|us\\.|global\\.)/i`) so it never matches; update the
pattern in that map (the function that returns `{ ...m, name: m.name.replace(...
) }`) to use a proper regex that matches literal dots, e.g.
`/^(eu\.|us\.|global\.)/i`, then keep the existing de-duplication (filter) and
sort logic unchanged so region-prefixed IDs like "eu.anthropic..." normalize to
"anthropic..." correctly.
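The double-escaping pitfall called out above is easy to demonstrate in isolation: inside a regex literal, `\\.` matches a literal backslash followed by any character, so the pattern can never match a plain region prefix. A minimal self-contained sketch (the model ID is illustrative):

```typescript
// Double-escaped pattern: `\\.` = a literal backslash plus any char,
// so it never matches real region-prefixed Bedrock IDs.
const broken = /^(eu\\.|us\\.|global\\.)/i;
// Correct pattern: `\.` matches a literal dot.
const fixed = /^(eu\.|us\.|global\.)/i;

const id = 'eu.anthropic.claude-3-haiku';
const brokenResult = id.replace(broken, ''); // left unchanged
const fixedResult = id.replace(fixed, ''); // prefix stripped
```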
---
Outside diff comments:
In
`@packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx`:
- Around line 241-263: The loop assumes providerModels = allModels[provider] is
defined; add a defensive guard before the iteration (e.g., check providerModels
!== undefined && providerModels.length > 0) and skip continuing when it's
missing so the for (const model of providerModels) loop in
ModelSelectorRefactored.tsx won't throw; specifically protect the providerModels
lookup for providers like "amazon-bedrock" and "google-vertex" (and any newly
unblocked provider) after the existing skip/continue checks, referencing the
allModels map and the getSupportedModels() data source.
In `@packages/core/src/llm/executor/provider-options.ts`:
- Around line 64-70: The returned providerOptions object for the branch where
provider === 'google-vertex-anthropic' uses the wrong key; change the top-level
key from 'vertex-anthropic' to 'anthropic' in the object literal returned by
that branch so the providerOptions shape matches `@ai-sdk/google-vertex`
expectations (i.e., the object returned by the provider ===
'google-vertex-anthropic' branch should have an anthropic property containing
cacheControl and sendReasoning).
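As a sketch of the shape fix, the branch should return an object keyed `anthropic`; the option values below are illustrative placeholders, not the real ones from provider-options.ts:

```typescript
// Corrected shape: top-level key 'anthropic', not 'vertex-anthropic'.
function vertexAnthropicProviderOptions(): Record<string, Record<string, unknown>> {
    return {
        anthropic: {
            // placeholder values for illustration only
            cacheControl: { type: 'ephemeral' },
            sendReasoning: true,
        },
    };
}

const opts = vertexAnthropicProviderOptions();
```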
In `@packages/core/src/llm/services/factory.ts`:
- Around line 384-395: Refactor createLLMService to replace the two optional
positional params compactionStrategy and authResolver with a single optional
options object (e.g., options?: { compactionStrategy?: CompactionStrategy |
null; authResolver?: LlmAuthResolver | null }) and update internal usage to read
from options.compactionStrategy and options.authResolver with sensible defaults;
update the JSDoc to rename `@param` compactionConfig to `@param` options and
document the two fields, adjust the function signature and any callers
accordingly to pass an options object instead of positional args, and ensure the
exported return type and other required imports (CompactionStrategy,
LlmAuthResolver) remain intact.
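The positional-to-options refactor described above could look roughly like this; the stand-in types and parameter names are assumptions for illustration, not the real factory signature:

```typescript
// Stand-in types; the real ones come from the core package.
type CompactionStrategy = { name: string };
type LlmAuthResolver = { resolve(): string };

interface CreateLLMServiceOptions {
    compactionStrategy?: CompactionStrategy | null;
    authResolver?: LlmAuthResolver | null;
}

// One optional options object instead of two optional positional params.
function createLLMService(
    config: { provider: string },
    options: CreateLLMServiceOptions = {}
): string {
    const strategy = options.compactionStrategy ?? null;
    const resolver = options.authResolver ?? null;
    return `${config.provider}:${strategy?.name ?? 'none'}:${resolver ? 'auth' : 'no-auth'}`;
}

const label = createLLMService({ provider: 'openai' });
```

Callers then pass `{ authResolver }` explicitly instead of threading `undefined` through skipped positional slots.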
In `@scripts/sync-llm-models.ts`:
- Around line 311-314: The error message thrown when modelsDevProvider is
missing is misleadingly hard-coded to "dexto-nova"; update the throw in the
block that checks modelsDevProvider to use a generic message referencing
modelsDevProviderId and the current provider variable (e.g., "models.dev
provider '${modelsDevProviderId}' not found (needed for provider
'${provider}')") so it correctly reflects any provider passed into this
function/loop where modelsDevProvider, modelsDevProviderId and provider are
used.
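A sketch of the generic message (the provider names used below are examples only):

```typescript
// Generic error referencing both ids, instead of hard-coding "dexto-nova".
function assertModelsDevProvider(
    modelsDevProvider: unknown,
    modelsDevProviderId: string,
    provider: string
): void {
    if (!modelsDevProvider) {
        throw new Error(
            `models.dev provider '${modelsDevProviderId}' not found (needed for provider '${provider}')`
        );
    }
}

let message = '';
try {
    assertModelsDevProvider(undefined, 'zhipuai', 'zai-coding-plan');
} catch (err) {
    message = (err as Error).message;
}
```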
---
Duplicate comments:
In `@packages/agent-management/src/utils/api-key-resolver.ts`:
- Around line 1-5: The core provider-to-env-var mapping is missing fallback env
vars for certain providers; update the PROVIDER_API_KEY_MAP in packages/core's
api-key-resolver so that 'zai' and 'zai-coding-plan' include 'ZHIPUAI_API_KEY'
as a fallback and 'kimi-for-coding' includes 'MOONSHOT_API_KEY' as a fallback;
ensure these keys are returned by getApiKeyEnvVarsForProvider and thus picked up
by getPrimaryApiKeyEnvVar/resolveApiKeyForProvider so the re-exports in
api-key-resolver.ts (getApiKeyEnvVarsForProvider, getPrimaryApiKeyEnvVar,
resolveApiKeyForProvider) will surface the correct env var fallbacks.
In `@packages/cli/src/cli/commands/connect/index.ts`:
- Around line 42-45: The function handleConnectCommand currently declares
options as optional; make it a required parameter with a default value to avoid
repeated undefined checks: change the signature to accept options: {
interactive?: boolean } = {} (i.e., remove the optional ? from the parameter
itself and provide an empty-object default), then update the body to use
options.interactive directly (keep the existing conditional for
options.interactive === false) and adjust any callers to stop passing undefined
where applicable; reference handleConnectCommand so you can locate and update
all usages and the parameter definition accordingly.
- Around line 69-71: The code uses a non-null assertion on provider.methods[0]
when computing methodId; remove the '!' and handle the potential undefined
safely by either using optional chaining (provider.methods[0]?.id) with a clear
fallback or by narrowing first (e.g., const first = provider.methods[0]; if
(!first) throw/new error or handle) before reading .id; update the expression
that assigns methodId (reference: methodId and provider.methods) so TypeScript
is satisfied without the non-null assertion and you explicitly handle the
zero-length case.
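One way to satisfy TypeScript without the `!` is to narrow first; `AuthMethod` here is a stand-in shape:

```typescript
// Narrow methods[0] instead of using a non-null assertion.
type AuthMethod = { id: string; label: string };

function firstMethodId(methods: AuthMethod[]): string {
    const first = methods[0];
    if (!first) {
        throw new Error('Provider has no auth methods configured');
    }
    return first.id; // `first` is known to be defined here
}

const id = firstMethodId([{ id: 'api_key', label: 'API key' }]);
```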
- Around line 377-381: There's a duplicated guard around calling open() in the
guided setup: locate the conditional that checks p.isCancel(shouldOpenDocs) &&
shouldOpenDocs and the nested check for provider.modelsDevProviderId that calls
open(`https://models.dev/${provider.modelsDevProviderId}`), then remove or
consolidate the duplicate conditional so open() is only invoked once; keep the
necessary truthiness checks (preserve p.isCancel/shouldOpenDocs and
provider.modelsDevProviderId checks) and ensure only a single code path calls
open().
---
Nitpick comments:
In
`@packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx`:
- Line 810: The variable hasCustomModels is computed from customModels but never
used; remove the dead variable declaration (hasCustomModels) from
ModelSelectorRefactored.tsx so there is no unused constant. Locate the const
hasCustomModels = customModels.length > 0; line and delete it, leaving any logic
that uses customModels directly intact.
In `@packages/core/src/llm/registry/models.manual.ts`:
- Line 30: Replace the final type assertion "as Partial<Record<LLMProvider,
ModelInfo[]>>" on the manual models object with "satisfies
Partial<Record<LLMProvider, ModelInfo[]>>" so TypeScript performs structural
checks on the outer record; locate the object literal that currently ends with
"as Partial<Record<LLMProvider, ModelInfo[]>>" (the manual models mapping) and
change that assertion to use satisfies while keeping the existing inner
ManualModelInfo validation intact.
In `@packages/core/src/llm/registry/sync.ts`:
- Around line 288-295: The includeModel function should be moved out of
buildModelsByProviderFromParsedSources to module scope since it captures no
closed-over state; relocate the function declaration (includeModel) to top-level
in the module and update all references. Also tighten the OpenAI filter: replace
the broad modelId.startsWith('o') check inside includeModel with explicit
allowed prefixes for OpenAI reasoning series (e.g., modelId.startsWith('o1') ||
modelId.startsWith('o3') || modelId.startsWith('o4')) or otherwise document the
chosen forward-compatibility strategy in a comment so the intent is clear.
- Around line 297-301: The loop currently casts providerId to LLMProvider
unsafely (providerKey = providerId as LLMProvider) which can produce invalid
entries in modelsByProvider if modelsDevApi contains unknown keys; replace the
cast with a runtime type guard that checks providerId against the known
LLM_PROVIDERS set (e.g., implement isKnownProvider(id: string): id is
LLMProvider using LLM_PROVIDERS) and skip or log any unknown keys, then use the
narrowed providerId (typed as LLMProvider) when populating modelsByProvider and
other code that expects LLMProvider.
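A minimal sketch of such a guard, using a small illustrative subset of providers rather than the real `LLM_PROVIDERS` list:

```typescript
// Runtime type guard replacing the unsafe `as LLMProvider` cast.
const LLM_PROVIDERS = ['openai', 'anthropic', 'zhipuai', 'moonshotai'] as const;
type LLMProvider = (typeof LLM_PROVIDERS)[number];

function isKnownProvider(id: string): id is LLMProvider {
    return (LLM_PROVIDERS as readonly string[]).includes(id);
}

const known = isKnownProvider('zhipuai');
const unknown = isKnownProvider('mystery-provider');
```

Unknown keys from models.dev can then be skipped (or logged) instead of silently polluting `modelsByProvider`.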
- Around line 276-286: The defaults map currently uses Partial<Record<string,
string>> which allows arbitrary string keys; change its type to
Partial<Record<LLMProvider, string>> so provider keys are validated at compile
time; update the declaration of defaults in sync.ts (the const defaults
variable) to use LLMProvider and add/import the LLMProvider type from its module
if not already imported, then ensure existing keys (openai, anthropic, google,
groq, xai, cohere, minimax, zhipuai, moonshotai) match the LLMProvider union to
avoid type errors.
In `@packages/webui/components/ModelPicker/ModelCard.tsx`:
- Around line 106-113: Extract the list of providers that don't require cloud
credentials into a shared constant (e.g., CLOUD_CREDENTIAL_PROVIDERS) and
replace the long OR chain in ModelCard.tsx (used to compute noApiKeyNeeded) with
a membership check like CLOUD_CREDENTIAL_PROVIDERS.includes(provider), then
export/import that constant so ApiKeysSection.tsx can reuse it instead of
maintaining a separate EXCLUDED_PROVIDERS list—ensure both files reference the
single shared constant to keep the two lists in sync.
In `@scripts/sync-llm-models.ts`:
- Around line 373-380: The includeModel function uses a too-broad check for
OpenAI models (providerId === 'openai') by using modelId.startsWith('o'); narrow
this to only the known reasoning model prefixes (e.g., check for 'o1-', 'o3-',
'o4-' or use a regex like /^o[134]-/) or explicit allowed modelId patterns so
non-reasoning models that start with 'o' are excluded; update the condition in
includeModel to test these specific prefixes/patterns (reference: includeModel,
providerId 'openai', modelId.startsWith) and adjust any related tests or
comments accordingly.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (45)
- AGENTS.md
- docs/static/openapi/openapi.json
- feature-plans/oauth-and-more-providers/PLAN.md
- feature-plans/oauth-and-more-providers/UPDATED_DIRECTION.md
- feature-plans/oauth-and-more-providers/USER_VERIFICATION.md
- feature-plans/oauth-and-more-providers/WORKING_MEMORY.md
- package.json
- packages/agent-management/src/auth/connect-catalog.ts
- packages/agent-management/src/index.ts
- packages/agent-management/src/models/custom-models.ts
- packages/agent-management/src/utils/api-key-resolver.ts
- packages/agent-management/src/utils/api-key-store.ts
- packages/cli/src/cli/commands/connect/index.test.ts
- packages/cli/src/cli/commands/connect/index.ts
- packages/cli/src/cli/commands/setup.ts
- packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx
- packages/cli/src/cli/ink-cli/components/overlays/custom-model-wizard/provider-config.ts
- packages/cli/src/cli/ink-cli/utils/llm-provider-display.ts
- packages/cli/src/cli/utils/api-key-verification.ts
- packages/cli/src/cli/utils/provider-setup.ts
- packages/cli/src/config/cli-overrides.ts
- packages/core/src/llm/curation-config.ts
- packages/core/src/llm/executor/provider-options.ts
- packages/core/src/llm/executor/stream-processor.ts
- packages/core/src/llm/formatters/vercel.ts
- packages/core/src/llm/providers.generated.ts
- packages/core/src/llm/registry/auto-update.test.ts
- packages/core/src/llm/registry/auto-update.ts
- packages/core/src/llm/registry/index.test.ts
- packages/core/src/llm/registry/index.ts
- packages/core/src/llm/registry/models.generated.ts
- packages/core/src/llm/registry/models.manual.ts
- packages/core/src/llm/registry/sync.ts
- packages/core/src/llm/resolver.ts
- packages/core/src/llm/services/factory.ts
- packages/core/src/llm/services/test-utils.integration.ts
- packages/core/src/llm/types.ts
- packages/core/src/utils/api-key-resolver.ts
- packages/webui/components/ModelPicker/CompactModelCard.tsx
- packages/webui/components/ModelPicker/CustomModelForms.tsx
- packages/webui/components/ModelPicker/ModelCard.tsx
- packages/webui/components/ModelPicker/ModelPickerModal.tsx
- packages/webui/components/ModelPicker/constants.tsx
- packages/webui/components/settings/sections/ApiKeysSection.tsx
- scripts/sync-llm-models.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/webui/components/ModelPicker/constants.tsx
function makeProfile(params: {
    profileId: string;
    providerId: string;
    methodId: string;
}): LlmAuthProfile {
    return {
        profileId: params.profileId,
        providerId: params.providerId,
        methodId: params.methodId,
        label: 'Test label',
        credential: { type: 'api_key', key: 'sk-test-123' },
        createdAt: Date.now(),
Replace API‑key‑like literals to avoid secret‑scanning failures.
Gitleaks flagged these values; even in tests this can fail CI. Use neutral placeholders that don’t match key patterns.
🔧 Suggested fix
- credential: { type: 'api_key', key: 'sk-test-123' },
+ credential: { type: 'api_key', key: 'test-api-key' },
@@
- vi.mocked(prompts.password).mockResolvedValueOnce('sk-live-1234567890');
+ vi.mocked(prompts.password).mockResolvedValueOnce('test-api-key-1');
@@
- credential: { type: 'api_key', key: 'sk-live-1234567890' },
+ credential: { type: 'api_key', key: 'test-api-key-1' },
@@
- vi.mocked(prompts.password).mockResolvedValueOnce('sk-new-1234567890');
+ vi.mocked(prompts.password).mockResolvedValueOnce('test-api-key-2');
@@
- credential: { type: 'api_key', key: 'sk-new-1234567890' },
+ credential: { type: 'api_key', key: 'test-api-key-2' },
Also applies to: 66-84, 115-137
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/cli/commands/connect/index.test.ts` around lines 42 - 53,
The test contains API-key-like literal strings (e.g., credential.key =
'sk-test-123' in makeProfile) that trigger secret scanners; replace those with
neutral, non-key placeholders (for example 'placeholder-api-key' or
'TEST_API_KEY') in makeProfile and in the other literal occurrences referenced
(the blocks at 66-84 and 115-137) so tests use clearly non-secret values while
preserving the same shape/type of the credential object and other test data.
if (method.kind === 'api_key') {
    const apiKey = await p.password({
        message: `Enter ${provider.label} API key`,
        validate: (value) => {
            if (!value?.trim()) return 'API key is required';
            if (value.trim().length < 10) return 'API key looks unusually short';
            return undefined;
        },
    });
    if (isCancel(apiKey)) {
        p.cancel('Connect cancelled');
        return;
    }

    await upsertLlmAuthProfile({
        profileId,
        providerId: provider.providerId,
        methodId: method.id,
        label: method.label,
        credential: { type: 'api_key', key: String(apiKey).trim() },
    });
    await setDefaultLlmAuthProfile({ providerId: provider.providerId, profileId });
🛠️ Refactor suggestion | 🟠 Major
Route API‑key setup through the shared helper.
The manual prompt here bypasses the context-aware UX in packages/cli/src/cli/utils/api-key-setup.ts; please use that helper for the API-key flow so the setup experience stays consistent.
As per coding guidelines, Use context-aware setup UX from packages/cli/src/cli/utils/api-key-setup.ts for context-dependent setup flows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/cli/commands/connect/index.ts` around lines 165 - 187,
Replace the inline p.password prompt in the method.kind === 'api_key' branch
with the shared API-key setup helper exported from
packages/cli/src/cli/utils/api-key-setup.ts: call that helper to obtain the
validated API key (and handle cancel via isCancel or the helper's cancellation
contract), then pass the returned key into upsertLlmAuthProfile({ profileId,
providerId: provider.providerId, methodId: method.id, label: method.label,
credential: { type: 'api_key', key: String(key).trim() } }) and follow with
setDefaultLlmAuthProfile({ providerId: provider.providerId, profileId }); ensure
you preserve the existing validation/short-key checks and cancellation flow but
centralize them in the shared helper to keep the UX consistent.
case 'zhipuai':
case 'zhipuai-coding-plan':
case 'zai':
case 'zai-coding-plan':
    return await verifyGLM(apiKey);
🧩 Analysis chain
🌐 Web query:
ZAI ZhipuAI international API endpoint documentation
💡 Result:
ZhipuAI (ZAI) “BigModel” public API docs (incl. the international-facing HTTPS endpoints) are on the Zhipu AI open documentation portal (智谱AI开放文档):
- Documentation home: https://docs.bigmodel.cn/ [1]
- Chat Completions API reference (shows the full request URL): POST https://open.bigmodel.cn/api/paas/v4/chat/completions [2]
SDKs typically configure the same base URL, e.g. the official Python SDK example uses:
base_url="https://open.bigmodel.cn/api/paas/v4/" [3]
Sources: [1] https://docs.bigmodel.cn/ , [2] https://docs.bigmodel.cn/api-reference , [3] https://github.com/MetaGLM/zhipuai-sdk-python-v4
🌐 Web query:
ZAI api.zai.chat API base URL
💡 Result:
The Z.AI (GLM) API base URL is:
- Overseas: https://api.z.ai/api/paas/v4/ [1][2]
- Mainland China (Zhipu): https://open.bigmodel.cn/api/paas/v4/ [1]
I couldn’t find an official Z.AI base URL on the api.zai.chat domain; the official docs/SDKs point to api.z.ai instead. [1][2]
Sources: [1] [2] [3]
🏁 Script executed:
# Search for any references to ZAI or zai endpoints in the codebase
rg -i "zai|zhipuai" --type ts --type tsx -B 2 -A 2 | grep -i "api\|endpoint\|url\|host"
Repository: truffle-ai/dexto
Length of output: 87
🏁 Script executed:
# Look at the verifyGLM function to understand the current implementation
sed -n '257,271p' packages/cli/src/cli/utils/api-key-verification.ts
Repository: truffle-ai/dexto
Length of output: 506
🏁 Script executed:
# Check if there are any other endpoint configurations in the codebase
fd -e ts -e tsx | xargs grep -l "open.bigmodel.cn\|api.zai" 2>/dev/null | head -20
Repository: truffle-ai/dexto
Length of output: 294
🏁 Script executed:
# Check provider-setup.ts for ZAI/ZhipuAI configuration context
head -100 packages/cli/src/cli/utils/provider-setup.ts
Repository: truffle-ai/dexto
Length of output: 2993
🏁 Script executed:
# Check the generated providers file to see all provider definitions
rg "zai|zhipuai" packages/core/src/llm/providers.generated.ts -B 2 -A 2
Repository: truffle-ai/dexto
Length of output: 872
🏁 Script executed:
# Search for any existing separate handler for ZAI vs ZhipuAI
rg "verifyZAI|verifyZhipuai" packages/cli/src/cli/utils/api-key-verification.ts
Repository: truffle-ai/dexto
Length of output: 42
Create a separate verifyZAI() function for Z.AI international API endpoint.
Z.AI (international platform) uses https://api.z.ai/api/paas/v4/ while ZhipuAI (mainland) uses https://open.bigmodel.cn/api/paas/v4/. Currently, zai and zai-coding-plan are routed to verifyGLM(), which calls the mainland endpoint. This causes Z.AI keys to fail verification incorrectly.
🔧 Required fix
+async function verifyZAI(apiKey: string): Promise<VerificationResult> {
+ const response = await fetch('https://api.z.ai/api/paas/v4/models', {
+ method: 'GET',
+ headers: { Authorization: `Bearer ${apiKey}` },
+ });
+ if (response.ok) {
+ return { success: true, modelUsed: 'models-list' };
+ }
+ const error = await parseErrorResponse(response);
+ return { success: false, error };
+}
case 'zhipuai':
case 'zhipuai-coding-plan':
return await verifyGLM(apiKey);
+ case 'zai':
+ case 'zai-coding-plan':
+ return await verifyZAI(apiKey);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/cli/utils/api-key-verification.ts` around lines 46 - 50, The
switch currently routes 'zai' and 'zai-coding-plan' to verifyGLM (which hits the
mainland ZhipuAI endpoint), causing international Z.AI keys to fail; implement a
new verifyZAI(apiKey: string) function that mirrors verifyGLM's
signature/behavior but targets the international base URL
(https://api.z.ai/api/paas/v4/) and uses the same request/response handling and
return type as verifyGLM, then update the switch so the 'zai' and
'zai-coding-plan' cases call verifyZAI(...) instead of verifyGLM(...).
if (providerId === 'amazon-bedrock') {
    modelsByProvider[providerKey] = models
        .map((m) => ({ ...m, name: m.name.replace(/^(eu\.|us\.|global\.)/i, '') }))
        .filter((m, idx, arr) => arr.findIndex((x) => x.name === m.name) === idx)
        .sort((a, b) => a.name.localeCompare(b.name));
Missing apac. prefix in the amazon-bedrock normalisation regex.
Amazon Bedrock cross-region inference supports three geographic prefixes (us., eu., and apac.) that precede the model ID. The current regex only strips eu., us., and global., so any apac.* variant from models.dev keeps its prefix, bypasses deduplication, and appears as a separate entry (e.g., apac.anthropic.claude-3-haiku-… alongside the stripped anthropic.claude-3-haiku-…).
🐛 Proposed fix
+ .map((m) => ({ ...m, name: m.name.replace(/^(eu\.|us\.|apac\.|global\.)/i, '') }))
+ .map((m) => ({ ...m, name: m.name.replace(/^(eu\.|us\.|apac\.|global\.)/i, '') }))📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if (providerId === 'amazon-bedrock') { | |
| modelsByProvider[providerKey] = models | |
| .map((m) => ({ ...m, name: m.name.replace(/^(eu\.|us\.|global\.)/i, '') })) | |
| .filter((m, idx, arr) => arr.findIndex((x) => x.name === m.name) === idx) | |
| .sort((a, b) => a.name.localeCompare(b.name)); | |
| if (providerId === 'amazon-bedrock') { | |
| modelsByProvider[providerKey] = models | |
| .map((m) => ({ ...m, name: m.name.replace(/^(eu\.|us\.|apac\.|global\.)/i, '') })) | |
| .filter((m, idx, arr) => arr.findIndex((x) => x.name === m.name) === idx) | |
| .sort((a, b) => a.name.localeCompare(b.name)); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/llm/registry/sync.ts` around lines 312 - 316, The
amazon-bedrock normalization currently strips only "eu.", "us.", and "global."
prefixes in the models mapping (inside the providerId === 'amazon-bedrock' block
that assigns modelsByProvider[providerKey]) which misses "apac." and breaks
deduplication; update the replacement regex used in the map callback for m.name
(the m.name.replace(...) call) to also match and remove the "apac." prefix
(e.g., include apac\. in the alternation) so region-prefixed IDs normalize
consistently before the filter/sort steps.
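The normalisation plus deduplication with `apac.` included can be sketched end to end (model names are illustrative):

```typescript
// Region-prefix normalisation with `apac.` added, followed by the same
// de-duplication and sort used in the sync code.
type Model = { name: string };

function normalizeBedrockModels(models: Model[]): Model[] {
    return models
        .map((m) => ({ ...m, name: m.name.replace(/^(eu\.|us\.|apac\.|global\.)/i, '') }))
        .filter((m, idx, arr) => arr.findIndex((x) => x.name === m.name) === idx)
        .sort((a, b) => a.name.localeCompare(b.name));
}

// All three variants collapse to a single de-duplicated entry.
const result = normalizeBedrockModels([
    { name: 'apac.anthropic.claude-3-haiku' },
    { name: 'eu.anthropic.claude-3-haiku' },
    { name: 'anthropic.claude-3-haiku' },
]);
```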
for (const overlayId of overlayProviderIds) {
    modelsByProvider[overlayId] = [];
}
Unconditional overlay assignment silently overwrites models if an overlay ID ever appears in models.dev.
modelsByProvider[overlayId] = [] runs after the main loop for every entry in overlayProviderIds. If any of those IDs (e.g., ollama, local) were ever added to models.dev, their dynamically built model list would be silently replaced with []. Using ??= makes the intent of "initialise only if absent" explicit and prevents a hard-to-debug regression.
🐛 Proposed fix
for (const overlayId of overlayProviderIds) {
- modelsByProvider[overlayId] = [];
+ modelsByProvider[overlayId] ??= [];
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/llm/registry/sync.ts` around lines 322 - 324, The loop that
assigns empty arrays to overlay IDs currently overwrites any existing dynamic
entries; change the assignment in the loop that iterates overlayProviderIds so
it only initializes when absent (use nullish-coalescing assignment semantics)
instead of unconditional overwrite—update the code that assigns to
modelsByProvider[overlayId] to initialize only if modelsByProvider[overlayId] is
undefined/null (so existing entries built earlier in the sync logic are
preserved).
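The difference between `=` and `??=` here is observable in a few lines (provider names are illustrative):

```typescript
// `??=` assigns only when the current value is null/undefined, so a
// dynamically built entry survives the overlay initialization pass.
const modelsByProvider: Record<string, string[]> = {
    ollama: ['llama3'], // pretend this was built dynamically earlier
};

for (const overlayId of ['ollama', 'local']) {
    modelsByProvider[overlayId] ??= [];
}

const preservedLen = modelsByProvider['ollama']?.length ?? -1; // kept
const initializedLen = modelsByProvider['local']?.length ?? -1; // newly []
```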
const runtimeAuth = context?.authResolver?.resolveRuntimeAuth({ provider, model }) ?? null;
const apiKey = llmConfig.apiKey || runtimeAuth?.apiKey || resolveApiKeyForProvider(provider);
const extraHeaders = runtimeAuth?.headers;
🛠️ Refactor suggestion | 🟠 Major
Extract runtime/config/env resolution into a dedicated resolver.
Inlining multi-source fallback chains for apiKey and baseURL makes precedence harder to reason about; a small resolveX() helper with a documented order keeps this boundary explicit.
As per coding guidelines, Avoid multi-source values encoded as optional + fallback chains (e.g., a ?? b ?? c across multiple layers); prefer single source of truth for a value. If backward compatibility is required, isolate it in a single resolveX() function at the boundary that returns a non-optional value and document the selection order.
♻️ Suggested refactor
- const runtimeAuth = context?.authResolver?.resolveRuntimeAuth({ provider, model }) ?? null;
- const apiKey = llmConfig.apiKey || runtimeAuth?.apiKey || resolveApiKeyForProvider(provider);
+ const runtimeAuth = context?.authResolver?.resolveRuntimeAuth({ provider, model }) ?? null;
+ const resolveApiKey = () => {
+ if (llmConfig.apiKey?.trim()) return llmConfig.apiKey.trim();
+ if (runtimeAuth?.apiKey?.trim()) return runtimeAuth.apiKey.trim();
+ return resolveApiKeyForProvider(provider);
+ };
+ const apiKey = resolveApiKey();
@@
- const compatibleBaseURL =
- baseURL?.replace(/\/$/, '') ||
- runtimeBaseURL?.replace(/\/$/, '') ||
- process.env.OPENAI_BASE_URL?.replace(/\/$/, '');
+ const resolveOpenAiCompatibleBaseURL = () => {
+ const candidate =
+ baseURL?.trim() ||
+ runtimeBaseURL?.trim() ||
+ process.env.OPENAI_BASE_URL?.trim();
+ return candidate ? candidate.replace(/\/$/, '') : undefined;
+ };
+ const compatibleBaseURL = resolveOpenAiCompatibleBaseURL();
Also applies to: 84-87
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/llm/services/factory.ts` around lines 58 - 60, Extract the
multi-source fallback logic into a dedicated resolver function (e.g.,
resolveRuntimeConfig or resolveCredentials) that centralizes precedence and
returns a concrete config object; replace the inline chains around
resolveRuntimeAuth / runtimeAuth, the apiKey assignment (currently using
llmConfig.apiKey || runtimeAuth?.apiKey || resolveApiKeyForProvider(provider))
and extraHeaders with a single call to that resolver, and move any baseURL
resolution (lines ~84-87) into the same resolver so the selection order is
documented and backward-compatibility fallbacks are isolated in one place.
const EXTRA_API_KEY_ENV_VARS: Record<string, string[]> = {
    openai: ['OPENAI_KEY'],
    anthropic: ['ANTHROPIC_KEY', 'CLAUDE_API_KEY'],
    google: ['GOOGLE_API_KEY', 'GEMINI_API_KEY'],
    xai: ['X_AI_API_KEY'],
    zhipuai: ['ZHIPUAI_API_KEY'],
} as const;

function isLikelyApiKeyEnvVar(name: string): boolean {
    return /(API_KEY|API_TOKEN|TOKEN|SECRET)\b/i.test(name);
}

function uniqueInOrder(values: string[]): string[] {
    const seen = new Set<string>();
    const result: string[] = [];
    for (const value of values) {
        if (seen.has(value)) continue;
        seen.add(value);
        result.push(value);
    }
    return result;
}

export function getApiKeyEnvVarsForProvider(provider: LLMProvider): string[] {
    // Providers that don't use a single API key string
    if (
        provider === 'google-vertex' ||
        provider === 'google-vertex-anthropic' ||
        provider === 'local' ||
        provider === 'ollama'
    ) {
        return [];
    }

    // Bedrock supports an optional bearer token for the Converse API.
    // When absent, auth is handled via AWS credential chain (env/roles/SSO).
    if (provider === 'amazon-bedrock') {
        return ['AWS_BEARER_TOKEN_BEDROCK'];
    }

    const snapshot = PROVIDERS_BY_ID[provider];
    const fromSnapshot = snapshot?.env ?? [];
    const filtered = fromSnapshot.filter(isLikelyApiKeyEnvVar);
    const extras = EXTRA_API_KEY_ENV_VARS[provider] ?? [];
    return uniqueInOrder([...filtered, ...extras]);
}
Add DEXTO_API_KEY fallback for dexto-nova.
Since the provider snapshot is models.dev–derived, dexto-nova won’t appear there, so getApiKeyEnvVarsForProvider returns [] and resolveApiKeyForProvider can never find DEXTO_API_KEY. Add an explicit fallback.
🔧 Suggested fix
const EXTRA_API_KEY_ENV_VARS: Record<string, string[]> = {
openai: ['OPENAI_KEY'],
anthropic: ['ANTHROPIC_KEY', 'CLAUDE_API_KEY'],
google: ['GOOGLE_API_KEY', 'GEMINI_API_KEY'],
xai: ['X_AI_API_KEY'],
zhipuai: ['ZHIPUAI_API_KEY'],
+ 'dexto-nova': ['DEXTO_API_KEY'],
} as const;
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const EXTRA_API_KEY_ENV_VARS: Record<string, string[]> = {
    openai: ['OPENAI_KEY'],
    anthropic: ['ANTHROPIC_KEY', 'CLAUDE_API_KEY'],
    google: ['GOOGLE_API_KEY', 'GEMINI_API_KEY'],
    xai: ['X_AI_API_KEY'],
    zhipuai: ['ZHIPUAI_API_KEY'],
    'dexto-nova': ['DEXTO_API_KEY'],
} as const;

function isLikelyApiKeyEnvVar(name: string): boolean {
    return /(API_KEY|API_TOKEN|TOKEN|SECRET)\b/i.test(name);
}

function uniqueInOrder(values: string[]): string[] {
    const seen = new Set<string>();
    const result: string[] = [];
    for (const value of values) {
        if (seen.has(value)) continue;
        seen.add(value);
        result.push(value);
    }
    return result;
}

export function getApiKeyEnvVarsForProvider(provider: LLMProvider): string[] {
    // Providers that don't use a single API key string
    if (
        provider === 'google-vertex' ||
        provider === 'google-vertex-anthropic' ||
        provider === 'local' ||
        provider === 'ollama'
    ) {
        return [];
    }

    // Bedrock supports an optional bearer token for the Converse API.
    // When absent, auth is handled via AWS credential chain (env/roles/SSO).
    if (provider === 'amazon-bedrock') {
        return ['AWS_BEARER_TOKEN_BEDROCK'];
    }

    const snapshot = PROVIDERS_BY_ID[provider];
    const fromSnapshot = snapshot?.env ?? [];
    const filtered = fromSnapshot.filter(isLikelyApiKeyEnvVar);
    const extras = EXTRA_API_KEY_ENV_VARS[provider] ?? [];
    return uniqueInOrder([...filtered, ...extras]);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/utils/api-key-resolver.ts` around lines 11 - 56,
getApiKeyEnvVarsForProvider currently returns [] for providers not in the
models.dev snapshot (PROVIDERS_BY_ID), so the Dexto gateway provider
'dexto-nova' never gets its fallback API key; update getApiKeyEnvVarsForProvider
to include 'dexto-nova' mapping (similar to EXTRA_API_KEY_ENV_VARS entries) so
DEXTO_API_KEY is returned — either add 'dexto-nova': ['DEXTO_API_KEY'] to the
EXTRA_API_KEY_ENV_VARS constant or handle provider === 'dexto-nova' explicitly
before returning the snapshot-derived list, ensuring uniqueInOrder([...filtered,
...extras]) will include 'DEXTO_API_KEY'.
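The snapshot-miss behavior the comment describes can be reproduced with a minimal sketch (the shapes below are assumptions for illustration, not the real `api-key-resolver` module): a provider absent from the models.dev snapshot contributes no env vars, so only an explicit extras entry can surface `DEXTO_API_KEY`.

```typescript
// Minimal sketch with assumed shapes — not the real api-key-resolver module.
const PROVIDERS_BY_ID: Record<string, { env: string[] }> = {
    openai: { env: ['OPENAI_API_KEY'] },
    // 'dexto-nova' is not models.dev-derived, so it never appears here.
};

const EXTRA_API_KEY_ENV_VARS: Record<string, string[]> = {
    openai: ['OPENAI_KEY'],
    'dexto-nova': ['DEXTO_API_KEY'], // the suggested fallback entry
};

function getApiKeyEnvVars(provider: string): string[] {
    // Snapshot lookup yields [] for providers missing from PROVIDERS_BY_ID.
    const fromSnapshot = PROVIDERS_BY_ID[provider]?.env ?? [];
    const extras = EXTRA_API_KEY_ENV_VARS[provider] ?? [];
    // De-dupe while preserving order, like uniqueInOrder.
    return [...new Set([...fromSnapshot, ...extras])];
}

console.log(getApiKeyEnvVars('dexto-nova')); // ['DEXTO_API_KEY']
console.log(getApiKeyEnvVars('openai')); // ['OPENAI_API_KEY', 'OPENAI_KEY']
```

Without the `'dexto-nova'` extras entry, the first call would return `[]` and the resolver would have no env var to check.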
```typescript
if (providerId === 'amazon-bedrock') {
    modelsByProvider[providerId] = models
        // Normalize AWS region-prefixed IDs to the canonical unprefixed ID.
        .map((m) => ({ ...m, name: m.name.replace(/^(eu\\.|us\\.|global\\.)/i, '') }))
        // De-dupe after stripping prefixes.
        .filter((m, idx, arr) => arr.findIndex((x) => x.name === m.name) === idx)
        .sort((a, b) => a.name.localeCompare(b.name));
} else {
    modelsByProvider[providerId] = models;
}
```
Bug: Double-escaped backslashes in regex literal — Bedrock prefix stripping is a no-op.
The regex /^(eu\\.|us\\.|global\\.)/i uses \\ inside a regex literal, which matches a literal backslash followed by any character. Bedrock model IDs use dots (e.g., eu.anthropic.claude-...), not backslashes, so this regex never matches and the normalization silently does nothing.
Compare with other regexes in this file (e.g., line 652's `/-\d{8}/`), which use single backslashes correctly.
🐛 Proposed fix

```diff
-            .map((m) => ({ ...m, name: m.name.replace(/^(eu\\.|us\\.|global\\.)/i, '') }))
+            .map((m) => ({ ...m, name: m.name.replace(/^(eu\.|us\.|global\.)/i, '') }))
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
if (providerId === 'amazon-bedrock') {
    modelsByProvider[providerId] = models
        // Normalize AWS region-prefixed IDs to the canonical unprefixed ID.
        .map((m) => ({ ...m, name: m.name.replace(/^(eu\.|us\.|global\.)/i, '') }))
        // De-dupe after stripping prefixes.
        .filter((m, idx, arr) => arr.findIndex((x) => x.name === m.name) === idx)
        .sort((a, b) => a.name.localeCompare(b.name));
} else {
    modelsByProvider[providerId] = models;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/sync-llm-models.ts` around lines 395 - 404, In the amazon-bedrock
branch where modelsByProvider is set, the regex used to strip region prefixes in
the map callback is double-escaped (`/^(eu\\.|us\\.|global\\.)/i`) so it never
matches; update the pattern in that map (the function that returns `{ ...m,
name: m.name.replace(... ) }`) to use a proper regex that matches literal dots,
e.g. `/^(eu\.|us\.|global\.)/i`, then keep the existing de-duplication (filter)
and sort logic unchanged so region-prefixed IDs like "eu.anthropic..." normalize
to "anthropic..." correctly.
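The escaping pitfall is easy to reproduce in isolation; a quick sketch (the model ID below is illustrative, not taken from the script):

```typescript
// In a regex literal, \\ is an escaped backslash: /eu\\./ matches the three
// characters "eu\" plus any character, while /eu\./ matches the literal "eu.".
const doubleEscaped = /^(eu\\.|us\\.|global\\.)/i;
const correct = /^(eu\.|us\.|global\.)/i;

const id = 'eu.anthropic.claude-sonnet-4'; // illustrative Bedrock-style model ID
console.log(doubleEscaped.test(id)); // false — the region prefix is never stripped
console.log(correct.test(id)); // true
console.log(id.replace(correct, '')); // "anthropic.claude-sonnet-4"
```

The double-escaped form only comes up when a regex is built from a string (`new RegExp('^(eu\\.)')`), where `\\` in the string source collapses to a single backslash; in a regex literal it stays a literal backslash.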
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx (1)
Lines 296-318: ⚠️ Potential issue | 🟠 Major — Guard provider model lookup before iterating to avoid overlay load failures.

At line 318, `providerModels` is assumed iterable. If `getSupportedProviders()` includes a provider missing from `allModels`, this throws and aborts the whole model list load.

Proposed fix

```diff
     const providerModels = allModels[provider];
+    if (!providerModels || providerModels.length === 0) {
+        agent.logger.debug(`No registry models for provider: ${provider}`);
+        continue;
+    }
     for (const model of providerModels) {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx` around lines 296 - 318: when iterating providers in ModelSelectorRefactored (the `for (const provider of providers)` loop), guard the lookup of `providerModels = allModels[provider]` before iterating: check that providerModels is defined and iterable (e.g., `Array.isArray(providerModels)` or `providerModels && typeof providerModels[Symbol.iterator] === 'function'`) and skip this provider if not; this prevents an exception when `getSupportedProviders()` returns a provider missing from `allModels` and ensures the rest of the model list still loads.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In
`@packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx`:
- Around line 296-318: When iterating providers in ModelSelectorRefactored (the
for (const provider of providers) loop), guard the lookup of providerModels =
allModels[provider] before iterating: check that providerModels is defined and
iterable (e.g., Array.isArray(providerModels) or providerModels && typeof
providerModels[Symbol.iterator] === 'function') and skip this provider if not;
this prevents an exception when getSupportedProviders() returns a provider
missing from allModels and ensures the rest of the model list still loads.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- AGENTS.md
- docs/static/openapi/openapi.json
- package.json
- packages/cli/src/cli/ink-cli/components/overlays/ModelSelectorRefactored.tsx
- packages/core/src/llm/executor/stream-processor.ts
- packages/server/src/hono/index.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- AGENTS.md
- packages/core/src/llm/executor/stream-processor.ts
- packages/server/src/hono/index.ts
- package.json
Release Note

- `pnpm changeset` (select packages + bump)

Summary by CodeRabbit

- New Features
  - `connect` CLI command.
- Documentation
- Tests