feat: add MiniMax as LLM provider with M2.7 and M2.7-highspeed models #1937
octo-patch wants to merge 4 commits into AndyMik90:develop from
Add MiniMax M2.5 models as a first-class provider using the OpenAI-compatible adapter (@ai-sdk/openai-compatible), following the same pattern as the existing Z.AI integration.

Changes:
- Register 'minimax' in SupportedProvider enum and BuiltinProvider type
- Add createOpenAICompatible cases in both factory.ts and registry.ts
- Add 'MiniMax-' prefix to MODEL_PROVIDER_MAP for auto-detection
- Add MiniMax-M2.5 and MiniMax-M2.5-highspeed to model catalog
- Add provider presets (auto/balanced/quick) and model equivalences
- Add PROVIDER_REGISTRY entry for settings UI
- Add MiniMax detection test in factory.test.ts

MiniMax API: https://api.minimax.io/v1 (OpenAI-compatible)
Models: MiniMax-M2.5 (204K context), MiniMax-M2.5-highspeed (204K)
Env var: MINIMAX_API_KEY
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates MiniMax as a new first-class Large Language Model (LLM) provider into the application. This enhancement allows users to leverage MiniMax M2.5 models, expanding the range of available AI capabilities within the multi-provider architecture. The changes ensure seamless detection, configuration, and utilization of MiniMax models alongside existing providers.
📝 Walkthrough

Adds MiniMax provider support: new provider enum/type entries, auth/settings keys, provider registry metadata, factory/registry creation for an OpenAI-compatible MiniMax client, many MiniMax models/presets/equivalences, a model-prefix mapping, resolver mapping, and unit tests for prefix detection.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Registry
    participant Factory
    participant SDK as "OpenAI-compatible SDK"
    participant MinimaxAPI as "minimax API"
    Client->>Registry: request provider for modelId "minimax-m2.5"
    Registry->>Factory: determine provider = "minimax" and create instance
    Factory->>SDK: createOpenAICompatible(name: "minimax", apiKey, baseURL)
    SDK->>MinimaxAPI: send requests to https://api.minimax.io/v1
    MinimaxAPI-->>SDK: responses
    SDK-->>Factory: return provider client instance
    Factory-->>Registry: provider instance
    Registry-->>Client: provider ready
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
🎉 Thanks for your first PR!
A maintainer will review it soon. Please make sure:
- Your branch is synced with
develop - CI checks pass
- You've followed our contribution guide
Welcome to the Auto Claude community!
Code Review
This pull request successfully adds MiniMax as a new LLM provider, integrating it across the application from provider registration to model configuration. The changes are well-structured and follow existing patterns. My feedback focuses on improving consistency and maintainability, specifically regarding naming conventions and code duplication.
```ts
  'llama-': 'groq',
  'grok-': 'xai',
  'glm-': 'zai',
  'MiniMax-': 'minimax',
```
For consistency with other provider prefixes in MODEL_PROVIDER_MAP (e.g., 'claude-', 'gpt-', 'llama-'), which are all lowercase, it's recommended to use a lowercase prefix for MiniMax. This improves maintainability and predictability.
This change will require corresponding updates to model names in apps/desktop/src/shared/constants/models.ts and tests in apps/desktop/src/main/ai/providers/__tests__/factory.test.ts.
```diff
-  'MiniMax-': 'minimax',
+  'minimax-': 'minimax',
```
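For context on why the prefix's casing matters: MODEL_PROVIDER_MAP is a prefix-to-provider lookup consulted during auto-detection. The sketch below is illustrative only — the map entries mirror the PR, but the iteration and normalization logic is an assumption, not the repository's actual implementation (which appears to match prefixes case-sensitively, hence the reviewer's lowercase suggestion).

```typescript
// Hypothetical reconstruction of prefix-based provider detection.
// Entries mirror the PR's MODEL_PROVIDER_MAP; the function body is a sketch.
const MODEL_PROVIDER_MAP: Record<string, string> = {
  'claude-': 'anthropic',
  'llama-': 'groq',
  'grok-': 'xai',
  'glm-': 'zai',
  'minimax-': 'minimax', // lowercase, per the review suggestion
};

function detectProviderFromModel(modelId: string): string | undefined {
  // Normalizing the input here is one way to make detection tolerant of
  // mixed-case IDs like 'MiniMax-M2.5'; this normalization is this sketch's
  // addition, not necessarily what the PR's factory does.
  const normalized = modelId.toLowerCase();
  for (const [prefix, provider] of Object.entries(MODEL_PROVIDER_MAP)) {
    if (normalized.startsWith(prefix)) return provider;
  }
  return undefined;
}
```

With a lowercase map entry plus input normalization, both 'minimax-m2.5' and the original 'MiniMax-M2.5' spelling resolve to 'minimax', which sidesteps the consistency problem the reviewer raises.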
```ts
it('detects MiniMax from MiniMax- prefix', () => {
  expect(detectProviderFromModel('MiniMax-M2.5')).toBe('minimax');
});
```
To align with the suggested change of using a lowercase prefix for MiniMax models for consistency, this test should be updated to use a lowercase model name and test description.
```diff
-it('detects MiniMax from MiniMax- prefix', () => {
-  expect(detectProviderFromModel('MiniMax-M2.5')).toBe('minimax');
-});
+it('detects MiniMax from minimax- prefix', () => {
+  expect(detectProviderFromModel('minimax-m2.5')).toBe('minimax');
+});
```
```ts
case SupportedProvider.MiniMax:
  return createOpenAICompatible({
    name: 'minimax',
    apiKey,
    baseURL: baseURL ?? 'https://api.minimax.io/v1',
    headers,
  });
```
This logic for creating the MiniMax provider is duplicated in apps/desktop/src/main/ai/providers/registry.ts. To improve maintainability and avoid future inconsistencies, consider extracting this logic into a shared helper function. This pattern of duplication also exists for other createOpenAICompatible providers like ZAI.
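One way to extract such a shared helper, sketched under the assumption that both call sites take the same apiKey/baseURL/headers inputs. The helper name `createMinimaxProvider` is hypothetical, and the inline `createOpenAICompatible` here is a stand-in for the real @ai-sdk/openai-compatible adapter so the sketch stays self-contained:

```typescript
// Stand-in for @ai-sdk/openai-compatible's createOpenAICompatible; the real
// adapter returns a provider client rather than echoing its options.
interface OpenAICompatibleOptions {
  name: string;
  apiKey: string;
  baseURL: string;
  headers?: Record<string, string>;
}

function createOpenAICompatible(opts: OpenAICompatibleOptions): OpenAICompatibleOptions {
  return opts;
}

const MINIMAX_DEFAULT_BASE_URL = 'https://api.minimax.io/v1';

// Hypothetical shared helper: factory.ts and registry.ts would both call this
// instead of each inlining its own createOpenAICompatible({...}) block.
function createMinimaxProvider(
  apiKey: string,
  baseURL?: string,
  headers?: Record<string, string>,
): OpenAICompatibleOptions {
  return createOpenAICompatible({
    name: 'minimax',
    apiKey,
    baseURL: baseURL ?? MINIMAX_DEFAULT_BASE_URL,
    headers,
  });
}
```

Centralizing the default base URL and provider name in one place means a future URL change or rename only touches one function, which is the maintainability benefit the reviewer is after.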
```ts
{ value: 'MiniMax-M2.5', label: 'MiniMax M2.5', provider: 'minimax', description: 'Peak performance', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
{ value: 'MiniMax-M2.5-highspeed', label: 'MiniMax M2.5 High Speed', provider: 'minimax', description: 'Fast and agile', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
```
To maintain consistency with other model identifiers in the project, which are typically lowercase, the value for these new MiniMax models should be in lowercase. This change should be propagated to all usages of these model IDs within this file.
```diff
-{ value: 'MiniMax-M2.5', label: 'MiniMax M2.5', provider: 'minimax', description: 'Peak performance', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
-{ value: 'MiniMax-M2.5-highspeed', label: 'MiniMax M2.5 High Speed', provider: 'minimax', description: 'Fast and agile', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
+{ value: 'minimax-m2.5', label: 'MiniMax M2.5', provider: 'minimax', description: 'Peak performance', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
+{ value: 'minimax-m2.5-highspeed', label: 'MiniMax M2.5 High Speed', provider: 'minimax', description: 'Fast and agile', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
```
```ts
auto: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
balanced: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
quick: { primaryModel: 'MiniMax-M2.5-highspeed', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5-highspeed', planning: 'MiniMax-M2.5-highspeed', coding: 'MiniMax-M2.5-highspeed', qa: 'MiniMax-M2.5-highspeed' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
```
To align with the suggested change to lowercase model identifiers, the primaryModel and phaseModels in these presets should be updated.
```diff
-auto: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
-balanced: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
-quick: { primaryModel: 'MiniMax-M2.5-highspeed', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5-highspeed', planning: 'MiniMax-M2.5-highspeed', coding: 'MiniMax-M2.5-highspeed', qa: 'MiniMax-M2.5-highspeed' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
+auto: { primaryModel: 'minimax-m2.5', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5', planning: 'minimax-m2.5', coding: 'minimax-m2.5', qa: 'minimax-m2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
+balanced: { primaryModel: 'minimax-m2.5', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5', planning: 'minimax-m2.5', coding: 'minimax-m2.5', qa: 'minimax-m2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
+quick: { primaryModel: 'minimax-m2.5-highspeed', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5-highspeed', planning: 'minimax-m2.5-highspeed', coding: 'minimax-m2.5-highspeed', qa: 'minimax-m2.5-highspeed' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
```
```ts
  minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
},
'MiniMax-M2.5': {
  minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
  anthropic: { modelId: 'claude-sonnet-4-6', reasoning: { type: 'thinking_tokens', level: 'medium' } },
  openai: { modelId: 'gpt-5.2', reasoning: { type: 'reasoning_effort', level: 'medium' } },
},
'MiniMax-M2.5-highspeed': {
  minimax: { modelId: 'MiniMax-M2.5-highspeed', reasoning: { type: 'none' } },
  anthropic: { modelId: 'claude-haiku-4-5-20251001', reasoning: { type: 'none' } },
  openai: { modelId: 'gpt-5.1-codex-mini', reasoning: { type: 'reasoning_effort', level: 'low' } },
},
```
To align with the suggested change to lowercase model identifiers, the keys and modelId values in this equivalence mapping should be updated to use lowercase.
```diff
-  minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
-},
-'MiniMax-M2.5': {
-  minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
-  anthropic: { modelId: 'claude-sonnet-4-6', reasoning: { type: 'thinking_tokens', level: 'medium' } },
-  openai: { modelId: 'gpt-5.2', reasoning: { type: 'reasoning_effort', level: 'medium' } },
-},
-'MiniMax-M2.5-highspeed': {
-  minimax: { modelId: 'MiniMax-M2.5-highspeed', reasoning: { type: 'none' } },
-  anthropic: { modelId: 'claude-haiku-4-5-20251001', reasoning: { type: 'none' } },
-  openai: { modelId: 'gpt-5.1-codex-mini', reasoning: { type: 'reasoning_effort', level: 'low' } },
-},
+  minimax: { modelId: 'minimax-m2.5', reasoning: { type: 'none' } },
+},
+'minimax-m2.5': {
+  minimax: { modelId: 'minimax-m2.5', reasoning: { type: 'none' } },
+  anthropic: { modelId: 'claude-sonnet-4-6', reasoning: { type: 'thinking_tokens', level: 'medium' } },
+  openai: { modelId: 'gpt-5.2', reasoning: { type: 'reasoning_effort', level: 'medium' } },
+},
+'minimax-m2.5-highspeed': {
+  minimax: { modelId: 'minimax-m2.5-highspeed', reasoning: { type: 'none' } },
+  anthropic: { modelId: 'claude-haiku-4-5-20251001', reasoning: { type: 'none' } },
+  openai: { modelId: 'gpt-5.1-codex-mini', reasoning: { type: 'reasoning_effort', level: 'low' } },
+},
```
```ts
groq: { modelId: 'llama-3.3-70b-versatile', reasoning: { type: 'none' } },
xai: { modelId: 'grok-3-mini', reasoning: { type: 'reasoning_effort', level: 'medium' } },
zai: { modelId: 'glm-4.7', reasoning: { type: 'none' } },
minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
```
```ts
mistral: { modelId: 'mistral-small-latest', reasoning: { type: 'none' } },
groq: { modelId: 'llama-3.3-70b-versatile', reasoning: { type: 'none' } },
zai: { modelId: 'glm-4.5-flash', reasoning: { type: 'none' } },
minimax: { modelId: 'MiniMax-M2.5-highspeed', reasoning: { type: 'none' } },
```
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/desktop/src/main/ai/providers/__tests__/factory.test.ts`:
- Around line 166-168: The test only verifies
detectProviderFromModel('MiniMax-M2.5') yields 'minimax' but doesn't validate
the provider factory output; update the test to create the provider instance
using the same factory used elsewhere in this test file (i.e., call the module's
provider factory with the provider id returned by
detectProviderFromModel('MiniMax-M2.5')) and assert the resulting factory
produces the expected MiniMax-specific properties (e.g., provider.name ===
'minimax' and the default baseURL/value expected for MiniMax); ensure you import
and use the exact factory function from the file under test and add these
assertions so the branch behavior (name and baseURL) is validated despite the
openai-compatible mock.
In `@apps/desktop/src/main/ai/providers/types.ts`:
- Around line 23-24: SupportedProvider was extended with 'MiniMax' but the auth
lookup maps were not updated; add a minimax entry to the PROVIDER_ENV_VARS and
PROVIDER_SETTINGS_KEY records so TypeScript strict checks pass and stored
MiniMax keys resolve. Locate the constants PROVIDER_ENV_VARS and
PROVIDER_SETTINGS_KEY and add the corresponding minimax key/value entries
(matching the pattern used for other providers) and ensure the string keys align
with the new SupportedProvider 'MiniMax' value.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: ba0ee00d-9e6f-4124-bd14-9f809c161952
📒 Files selected for processing (8)
- apps/desktop/src/main/ai/config/types.ts
- apps/desktop/src/main/ai/providers/__tests__/factory.test.ts
- apps/desktop/src/main/ai/providers/factory.ts
- apps/desktop/src/main/ai/providers/registry.ts
- apps/desktop/src/main/ai/providers/types.ts
- apps/desktop/src/shared/constants/models.ts
- apps/desktop/src/shared/constants/providers.ts
- apps/desktop/src/shared/types/provider-account.ts
```ts
it('detects MiniMax from MiniMax- prefix', () => {
  expect(detectProviderFromModel('MiniMax-M2.5')).toBe('minimax');
});
```
🧹 Nitpick | 🔵 Trivial
Add a MiniMax factory assertion here, not just prefix detection.
This only proves MODEL_PROVIDER_MAP contains MiniMax-. Because the @ai-sdk/openai-compatible mock returns the same stub for Z.AI, MiniMax, and Ollama, the suite would still pass if the MiniMax branch used the wrong name or default baseURL.
🧪 Suggested test hardening

```diff
+import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
 import { createAnthropic } from '@ai-sdk/anthropic';
 import { createProvider, detectProviderFromModel, createProviderFromModelId } from '../factory';
 import { SupportedProvider } from '../types';

   it('detects MiniMax from MiniMax- prefix', () => {
     expect(detectProviderFromModel('MiniMax-M2.5')).toBe('minimax');
   });
+
+  it('wires MiniMax through the OpenAI-compatible adapter', () => {
+    createProvider({
+      config: { provider: SupportedProvider.MiniMax, apiKey: 'test-key' },
+      modelId: 'MiniMax-M2.5',
+    });
+
+    expect(createOpenAICompatible).toHaveBeenCalledWith(
+      expect.objectContaining({
+        name: 'minimax',
+        apiKey: 'test-key',
+        baseURL: 'https://api.minimax.io/v1',
+      }),
+    );
+  });
```

As per coding guidelines, apps/desktop/**/*.test.{ts,tsx}: Ensure tests are comprehensive and follow Vitest conventions. Check for proper mocking and test isolation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/ai/providers/__tests__/factory.test.ts` around lines
166 - 168, The test only verifies detectProviderFromModel('MiniMax-M2.5') yields
'minimax' but doesn't validate the provider factory output; update the test to
create the provider instance using the same factory used elsewhere in this test
file (i.e., call the module's provider factory with the provider id returned by
detectProviderFromModel('MiniMax-M2.5')) and assert the resulting factory
produces the expected MiniMax-specific properties (e.g., provider.name ===
'minimax' and the default baseURL/value expected for MiniMax); ensure you import
and use the exact factory function from the file under test and add these
assertions so the branch behavior (name and baseURL) is validated despite the
openai-compatible mock.
Address PR review feedback:
- Change all MiniMax model IDs from PascalCase to lowercase
('MiniMax-M2.5' → 'minimax-m2.5', 'MiniMax-M2.5-highspeed' →
'minimax-m2.5-highspeed') for consistency with other providers
- Update MODEL_PROVIDER_MAP prefix from 'MiniMax-' to 'minimax-'
- Add missing 'minimax' entries to PROVIDER_ENV_VARS and
PROVIDER_SETTINGS_KEY in auth/types.ts (critical for API key
resolution)
- Update all preset definitions and equivalence mappings
- Update factory detection test to use lowercase model name
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
PR Bot seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
Thanks for the thorough review! I've pushed a fix commit (c4057c0) addressing the feedback:

Changes made

- Lowercase model IDs (gemini-code-assist items 1, 2, 4, 6)
- Auth entries (coderabbitai critical item 8)

Acknowledged but not changed

- Factory duplication (gemini-code-assist item 3): the MiniMax provider creation is duplicated between factory.ts and registry.ts; the same duplication already exists for other createOpenAICompatible providers like Z.AI, so extracting a shared helper is left as a follow-up.
- Identical auto/balanced presets (gemini-code-assist item 5)
- Factory test assertion (coderabbitai nitpick item 7): the current prefix-detection test pattern is consistent with all other provider tests in the same describe block. A more thorough factory integration test could be a good follow-up but is out of scope for this PR.
```diff
 export type BuiltinProvider =
   | 'anthropic' | 'openai' | 'google' | 'amazon-bedrock' | 'azure'
   | 'mistral' | 'groq' | 'xai' | 'openrouter' | 'zai'
-  | 'ollama' | 'openai-compatible';
+  | 'minimax' | 'ollama' | 'openai-compatible';

 export type BillingModel = 'subscription' | 'pay-per-use';
```
Bug: The BUILTIN_TO_SUPPORTED map in resolver.ts is missing an entry for 'minimax', causing MiniMax accounts to be silently skipped during queue-based provider routing.
Severity: CRITICAL
Suggested Fix
Add the missing 'minimax': 'minimax' entry to the BUILTIN_TO_SUPPORTED map located in apps/desktop/src/main/ai/auth/resolver.ts to ensure the provider is correctly resolved during queue-based routing.
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: apps/desktop/src/shared/types/provider-account.ts#L7-L13
Potential issue: The new 'minimax' provider is added to the `BuiltinProvider` type, but
the `BUILTIN_TO_SUPPORTED` map in `resolver.ts` was not updated. The
`resolveAuthFromQueue()` function uses this map to route requests. When it encounters a
'minimax' account, the lookup `BUILTIN_TO_SUPPORTED['minimax']` returns `undefined`,
causing the function to silently `continue` and skip the account. This effectively
disables the MiniMax provider for any multi-provider queue-based routing, which is the
primary routing mechanism in the application, rendering the integration non-functional
for its main use case.
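The silent-skip failure mode described above can be modeled in miniature. The map name and the skip-on-undefined loop mirror the review's description of resolveAuthFromQueue(), but the surrounding types and map contents are simplified assumptions, not the repository's actual code:

```typescript
// Simplified model of the queue-based auth resolver described in the review.
// Without a 'minimax' entry, the lookup yields undefined and the account is
// silently skipped, which is exactly the reported CRITICAL bug.
const BUILTIN_TO_SUPPORTED: Record<string, string> = {
  anthropic: 'anthropic',
  openai: 'openai',
  zai: 'zai',
  minimax: 'minimax', // the missing entry the review asks to add
};

interface ProviderAccount {
  provider: string;
  apiKey: string;
}

function resolveAuthFromQueue(
  queue: ProviderAccount[],
): { provider: string; apiKey: string } | null {
  for (const account of queue) {
    const supported = BUILTIN_TO_SUPPORTED[account.provider];
    if (!supported) continue; // silent skip: the bug path when an entry is missing
    return { provider: supported, apiKey: account.apiKey };
  }
  return null;
}
```

Because the skip is a bare `continue` with no logging, a missing map entry produces no error at all — the provider simply never resolves, which is why the bug is easy to miss without a test that routes a MiniMax account through the queue.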
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/desktop/src/main/ai/auth/types.ts`:
- Line 100: The minimax -> 'globalMiniMaxApiKey' setting referenced in the
mapping is missing from the settings schema, causing resolveFromProfileApiKey()
(which reads PROVIDER_SETTINGS_KEY) to never find MiniMax keys; add the
corresponding field (globalMiniMaxApiKey) to the AppSettings definition in
apps/desktop/src/shared/types/settings.ts and/or type the settings map as keyof
AppSettings so the compiler prevents this drift, ensuring
resolveFromProfileApiKey() can read MiniMax keys via PROVIDER_SETTINGS_KEY.
In `@apps/desktop/src/shared/constants/models.ts`:
- Around line 425-430: The canonical equivalence for the full MiniMax ID
'minimax-m2.5' is ambiguous: update the 'minimax-m2.5' entry in the models map
so it exhaustively includes equivalent shorthand mappings for all providers
(e.g., add mappings for 'opus' and 'sonnet' like the other full-ID rows) OR
change the reverse lookup behavior in resolveModelEquivalent() to stop
reverse-fallback when a direct full-ID entry exists; specifically modify the
'minimax-m2.5' object in models.ts to include the same provider keys
(anthropic/opus/sonnet/openai/etc.) with their intended modelIds and reasoning,
or alter resolveModelEquivalent() to prefer direct full-ID matches and avoid
selecting the first shorthand by object order.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: b9cb27e4-374a-443f-97a8-1a1e61e2a962
📒 Files selected for processing (4)
- apps/desktop/src/main/ai/auth/types.ts
- apps/desktop/src/main/ai/config/types.ts
- apps/desktop/src/main/ai/providers/__tests__/factory.test.ts
- apps/desktop/src/shared/constants/models.ts
- Add missing 'minimax' entry to BUILTIN_TO_SUPPORTED map in resolver.ts, fixing silent skip of MiniMax accounts during queue-based provider routing
- Add globalMiniMaxApiKey (and globalZAIApiKey) to AppSettings interface, fixing auth settings lookup for profile-specific API keys
- Differentiate model equivalence tiers: 'sonnet' tier now maps to minimax-m2.5-highspeed instead of minimax-m2.5 (which is reserved for 'opus' tier), matching the pattern used by other multi-model providers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pushed commit addressing:

- CRITICAL fixes
- MAJOR fix
- MEDIUM items
```ts
minimax: {
  auto: { primaryModel: 'minimax-m2.5', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5', planning: 'minimax-m2.5', coding: 'minimax-m2.5', qa: 'minimax-m2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
  balanced: { primaryModel: 'minimax-m2.5', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5', planning: 'minimax-m2.5', coding: 'minimax-m2.5', qa: 'minimax-m2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
  quick: { primaryModel: 'minimax-m2.5-highspeed', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5-highspeed', planning: 'minimax-m2.5-highspeed', coding: 'minimax-m2.5-highspeed', qa: 'minimax-m2.5-highspeed' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
},
```
Bug: The MiniMax provider is missing the complex preset, causing the 'Complex Tasks' profile to silently fall back to the 'Auto' profile's settings.
Severity: MEDIUM
Suggested Fix
Add a complex preset to the minimax provider object within PROVIDER_PRESET_DEFINITIONS. This new preset should likely use the same model as the auto preset, such as minimax-m2.5, to align with the pattern used by other providers with limited model variety.
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: apps/desktop/src/shared/constants/models.ts#L322-L326
Potential issue: The MiniMax provider definition in `PROVIDER_PRESET_DEFINITIONS` is
missing a `complex` preset. When a user selects the 'Complex Tasks' profile, which has
an ID of 'complex', the `getProviderPresetOrFallback` function fails to find an exact
match for `minimax.complex`. The function then silently falls back to the `minimax.auto`
preset. This results in the 'Complex Tasks' profile behaving identically to the 'Auto'
profile for the MiniMax provider, ignoring the user's selection without any warning or
indication.
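The silent fallback described above can be modeled like this. The function name getProviderPresetOrFallback comes from the review text, but the lookup logic and preset shape below are assumptions used only to illustrate the behavior:

```typescript
// Simplified model of preset resolution with a silent fallback to 'auto'.
type PresetId = 'auto' | 'balanced' | 'quick' | 'complex';

interface Preset {
  primaryModel: string;
}

const PROVIDER_PRESET_DEFINITIONS: Record<string, Partial<Record<PresetId, Preset>>> = {
  minimax: {
    auto: { primaryModel: 'minimax-m2.5' },
    balanced: { primaryModel: 'minimax-m2.5' },
    quick: { primaryModel: 'minimax-m2.5-highspeed' },
    // no 'complex' entry -> the fallback below kicks in with no warning
  },
};

function getProviderPresetOrFallback(
  provider: string,
  preset: PresetId,
): Preset | undefined {
  const presets = PROVIDER_PRESET_DEFINITIONS[provider];
  if (!presets) return undefined;
  // Exact match first, then silently fall back to the 'auto' preset.
  return presets[preset] ?? presets.auto;
}
```

Here selecting 'complex' resolves to the 'auto' preset with no indication to the user, which is why the review suggests adding an explicit complex entry rather than relying on the fallback.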
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/desktop/src/shared/types/settings.ts`:
- Around line 287-288: DEFAULT_APP_SETTINGS is missing initializers for the
newly added optional fields globalZAIApiKey and globalMiniMaxApiKey; update the
DEFAULT_APP_SETTINGS object (same area that currently sets globalOpenAIApiKey)
to include explicit initializers for these two keys (set them to undefined with
the same typed form used elsewhere, e.g., as string | undefined) so they are
consistently present during resets and first-run initialization.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4fc29a66-d743-4299-8dcb-070280d8a267
📒 Files selected for processing (3)
- apps/desktop/src/main/ai/auth/resolver.ts
- apps/desktop/src/shared/constants/models.ts
- apps/desktop/src/shared/types/settings.ts
```ts
globalZAIApiKey?: string;
globalMiniMaxApiKey?: string;
```
🧹 Nitpick | 🔵 Trivial
Add new API key fields to DEFAULT_APP_SETTINGS for consistency.
The new globalZAIApiKey and globalMiniMaxApiKey fields are correctly added as optional. However, DEFAULT_APP_SETTINGS in apps/desktop/src/shared/constants/config.ts (around line 51) initializes globalOpenAIApiKey but not these new keys. For consistency with other global API key fields, add explicit undefined initializers:
```ts
// In apps/desktop/src/shared/constants/config.ts DEFAULT_APP_SETTINGS:
globalZAIApiKey: undefined as string | undefined,
globalMiniMaxApiKey: undefined as string | undefined,
```

This ensures consistent initialization across settings resets and first-launch scenarios.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/shared/types/settings.ts` around lines 287 - 288,
DEFAULT_APP_SETTINGS is missing initializers for the newly added optional fields
globalZAIApiKey and globalMiniMaxApiKey; update the DEFAULT_APP_SETTINGS object
(same area that currently sets globalOpenAIApiKey) to include explicit
initializers for these two keys (set them to undefined with the same typed form
used elsewhere, e.g., as string | undefined) so they are consistently present
during resets and first-run initialization.
- Add MiniMax-M2.7-highspeed to model catalog for low-latency scenarios
- Upgrade quick preset from M2.5-highspeed to M2.7-highspeed
- Add M2.7-highspeed model equivalence entry
- Update haiku tier to map to M2.7-highspeed
- Update provider detection test for M2.7-highspeed
Actionable comments posted: 1
♻️ Duplicate comments (1)

apps/desktop/src/main/ai/providers/__tests__/factory.test.ts (1)

166-170: 🧹 Nitpick | 🔵 Trivial — Harden MiniMax coverage with a factory-path assertion.

Lines 166-170 verify prefix detection only. With the shared @ai-sdk/openai-compatible mock, the suite can still pass even if MiniMax is wired with an incorrect name or default baseURL.

🧪 Suggested test hardening

```diff
+import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
 import { createAnthropic } from '@ai-sdk/anthropic';
 import { createProvider, detectProviderFromModel, createProviderFromModelId } from '../factory';
 import { SupportedProvider } from '../types';

   it('detects MiniMax from minimax- prefix', () => {
     expect(detectProviderFromModel('minimax-m2.7')).toBe('minimax');
     expect(detectProviderFromModel('minimax-m2.7-highspeed')).toBe('minimax');
     expect(detectProviderFromModel('minimax-m2.5')).toBe('minimax');
   });
+
+  it('wires MiniMax through the OpenAI-compatible adapter', () => {
+    createProvider({
+      config: { provider: SupportedProvider.MiniMax, apiKey: 'test-key' },
+      modelId: 'minimax-m2.7',
+    });
+
+    expect(createOpenAICompatible).toHaveBeenCalledWith(
+      expect.objectContaining({
+        name: 'minimax',
+        apiKey: 'test-key',
+        baseURL: 'https://api.minimax.io/v1',
+      }),
+    );
+  });
```

As per coding guidelines, apps/desktop/**/*.test.{ts,tsx}: Ensure tests are comprehensive and follow Vitest conventions.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/ai/providers/__tests__/factory.test.ts` around lines 166-170, the test only checks prefix detection via detectProviderFromModel('minimax-...') but must also assert the actual provider factory wiring; update the test to call the provider factory resolver (e.g., createProviderFromModel or getProviderByModel) for 'minimax-m2.7' and assert the returned provider metadata contains the expected factory/name/baseURL values for MiniMax (for example provider.name === 'minimax' and provider.baseURL matches the MiniMax factory URL or provider.factoryPath matches the MiniMax factory module) so the test fails if MiniMax is wired with the wrong implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/desktop/src/shared/constants/providers.ts`:
- Around line 65-69: Update the provider entry with id 'minimax' by editing its
description field to include the missing "minimax-m2.5-highspeed" model name so
the text lists "MiniMax M2.7, M2.7-highspeed, minimax-m2.5-highspeed and M2.5"
(or otherwise include that model) to match the model catalog; locate the object
with id: 'minimax' and adjust the description string accordingly.
---
Duplicate comments:
In `@apps/desktop/src/main/ai/providers/__tests__/factory.test.ts`:
- Around line 166-170: The test only checks prefix detection via
detectProviderFromModel('minimax-...') but must also assert the actual provider
factory wiring; update the test to call the provider factory resolver (e.g.,
createProviderFromModel or getProviderByModel) for 'minimax-m2.7' and assert the
returned provider metadata contains the expected factory/name/baseURL values for
MiniMax (for example provider.name === 'minimax' and provider.baseURL matches
the MiniMax factory URL or provider.factoryPath matches the MiniMax factory
module) so the test fails if MiniMax is wired with the wrong implementation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 67b698a7-9d5c-4a88-903c-20851ea39a0e
📒 Files selected for processing (3)
- apps/desktop/src/main/ai/providers/__tests__/factory.test.ts
- apps/desktop/src/shared/constants/models.ts
- apps/desktop/src/shared/constants/providers.ts
```ts
id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed and M2.5 models',
category: 'popular',
authMethods: ['api-key'], envVars: ['MINIMAX_API_KEY'],
configFields: ['baseUrl'], website: 'https://platform.minimax.io',
},
```
Provider description omits minimax-m2.5-highspeed.
Line 65 is missing one currently supported MiniMax model, which makes the UI description inconsistent with the model catalog.
✏️ Suggested text fix
```diff
-id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed and M2.5 models',
+id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed, M2.5, and M2.5-highspeed models',
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed and M2.5 models',
+id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed, M2.5, and M2.5-highspeed models',
 category: 'popular',
 authMethods: ['api-key'], envVars: ['MINIMAX_API_KEY'],
 configFields: ['baseUrl'], website: 'https://platform.minimax.io',
},
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/shared/constants/providers.ts` around lines 65 - 69, Update
the provider entry with id 'minimax' by editing its description field to include
the missing "minimax-m2.5-highspeed" model name so the text lists "MiniMax M2.7,
M2.7-highspeed, minimax-m2.5-highspeed and M2.5" (or otherwise include that
model) to match the model catalog; locate the object with id: 'minimax' and
adjust the description string accordingly.
Why
MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities. M2.7-highspeed provides a low-latency option for quick iterations.