
feat: add MiniMax as LLM provider with M2.7 and M2.7-highspeed models #1937

Open

octo-patch wants to merge 4 commits into AndyMik90:develop from octo-patch:feat/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 12, 2026

Summary

  • Add MiniMax as a first-class LLM provider using OpenAI-compatible adapter
  • Include MiniMax-M2.7 (flagship) and MiniMax-M2.7-highspeed (low-latency) models
  • Set MiniMax-M2.7 as default model with M2.7-highspeed for quick preset
  • Retain M2.5 and M2.5-highspeed as available alternatives

Changes

  • Register MiniMax provider in SupportedProvider enum, factory, and registry
  • Add 4 models to catalog: M2.7, M2.7-highspeed, M2.5, M2.5-highspeed
  • Add provider presets (auto/balanced/quick) with appropriate model mapping
  • Add model equivalence entries for cross-provider resolution
  • Add MiniMax detection test in factory.test.ts
  • Add PROVIDER_REGISTRY entry for settings UI
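The registration steps above can be sketched roughly as follows. This is a hedged illustration, not the project's actual code: the enum members, config shape, and the stub standing in for `createOpenAICompatible` from `@ai-sdk/openai-compatible` are assumptions; only the provider name, default base URL, and the "same pattern as Z.AI" structure come from the PR description.

```typescript
// Assumed shapes; only 'minimax' and the base URL are taken from the PR text.
enum SupportedProvider {
  Anthropic = 'anthropic',
  ZAI = 'zai',
  MiniMax = 'minimax', // new entry added by this PR
}

interface ProviderConfig {
  provider: SupportedProvider;
  apiKey: string;
  baseURL?: string;
}

// Stand-in for createOpenAICompatible from @ai-sdk/openai-compatible.
function createOpenAICompatible(opts: { name: string; apiKey: string; baseURL: string }) {
  return { name: opts.name, baseURL: opts.baseURL };
}

function createProvider(config: ProviderConfig) {
  switch (config.provider) {
    case SupportedProvider.MiniMax:
      // Same pattern as the existing Z.AI branch: an OpenAI-compatible
      // client pointed at MiniMax's endpoint unless the caller overrides it.
      return createOpenAICompatible({
        name: 'minimax',
        apiKey: config.apiKey,
        baseURL: config.baseURL ?? 'https://api.minimax.io/v1',
      });
    default:
      throw new Error(`unhandled provider: ${config.provider}`);
  }
}

const client = createProvider({ provider: SupportedProvider.MiniMax, apiKey: 'test-key' });
console.log(client.name, client.baseURL);
```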

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities. M2.7-highspeed provides a low-latency option for quick iterations.

Testing

  • Unit tests updated and passing (27/27)
  • Provider detection covers M2.7, M2.7-highspeed, and M2.5 models

Summary by CodeRabbit

  • New Features

    • Added MiniMax provider with models MiniMax M2.7, M2.7 High Speed, M2.5, and M2.5 High Speed.
    • Introduced MiniMax presets (auto, balanced, quick) and comprehensive model equivalence mappings for integration with existing providers.
    • Added credential and configuration support for MiniMax (API key, base URL) and listed MiniMax in the provider registry.
  • Tests

    • Added a test to verify MiniMax model ID detection.

Add MiniMax M2.5 models as a first-class provider using the
OpenAI-compatible adapter (@ai-sdk/openai-compatible), following the
same pattern as the existing Z.AI integration.

Changes:
- Register 'minimax' in SupportedProvider enum and BuiltinProvider type
- Add createOpenAICompatible cases in both factory.ts and registry.ts
- Add 'MiniMax-' prefix to MODEL_PROVIDER_MAP for auto-detection
- Add MiniMax-M2.5 and MiniMax-M2.5-highspeed to model catalog
- Add provider presets (auto/balanced/quick) and model equivalences
- Add PROVIDER_REGISTRY entry for settings UI
- Add MiniMax detection test in factory.test.ts

MiniMax API: https://api.minimax.io/v1 (OpenAI-compatible)
Models: MiniMax-M2.5 (204K context), MiniMax-M2.5-highspeed (204K)
Env var: MINIMAX_API_KEY
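The MODEL_PROVIDER_MAP auto-detection mentioned in the commit message can be sketched as a simple prefix walk. The map entries mirror prefixes quoted elsewhere in this thread; the helper name matches the test in factory.test.ts, but the real implementation (including whether matching is case-sensitive) may differ — this sketch lowercases the model ID before matching.

```typescript
// Prefix-to-provider table; entries other than 'minimax-' are examples
// quoted from review comments in this thread.
const MODEL_PROVIDER_MAP: Record<string, string> = {
  'claude-': 'anthropic',
  'gpt-': 'openai',
  'glm-': 'zai',
  'minimax-': 'minimax',
};

function detectProviderFromModel(modelId: string): string | undefined {
  const lower = modelId.toLowerCase(); // assumption: case-insensitive match
  for (const [prefix, provider] of Object.entries(MODEL_PROVIDER_MAP)) {
    if (lower.startsWith(prefix)) return provider;
  }
  return undefined;
}

console.log(detectProviderFromModel('minimax-m2.5-highspeed')); // prints: minimax
console.log(detectProviderFromModel('claude-sonnet-4-6')); // prints: anthropic
```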
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates MiniMax as a new first-class Large Language Model (LLM) provider into the application. This enhancement allows users to leverage MiniMax M2.5 models, expanding the range of available AI capabilities within the multi-provider architecture. The changes ensure seamless detection, configuration, and utilization of MiniMax models alongside existing providers.

Highlights

  • Provider Registration: Added 'minimax' to the SupportedProvider enum, BuiltinProvider type, and PROVIDER_REGISTRY to formally integrate MiniMax into the system.
  • Factory & Registry Integration: Implemented createOpenAICompatible cases in both factory.ts and registry.ts for MiniMax, following the established pattern for other providers like Z.AI.
  • Model Detection: Included the 'MiniMax-' prefix in MODEL_PROVIDER_MAP to enable automatic detection of MiniMax models.
  • Model Catalog Expansion: Added MiniMax-M2.5 (peak performance) and MiniMax-M2.5-highspeed (fast) models, both with a 204K context window, to ALL_AVAILABLE_MODELS.
  • Presets & Equivalences: Defined provider presets (auto/balanced/quick) and cross-provider model equivalences for MiniMax models, ensuring compatibility with existing shorthands (opus, sonnet, haiku).
  • Test Coverage: Introduced a new test in factory.test.ts specifically for MiniMax prefix detection, enhancing overall test coverage.
Changelog
  • apps/desktop/src/main/ai/config/types.ts
    • Added 'MiniMax-' prefix to the MODEL_PROVIDER_MAP for automatic provider detection.
  • apps/desktop/src/main/ai/providers/tests/factory.test.ts
    • Added a test case to verify MiniMax model prefix detection.
  • apps/desktop/src/main/ai/providers/factory.ts
    • Implemented a new case for SupportedProvider.MiniMax to create an OpenAI-compatible provider instance.
  • apps/desktop/src/main/ai/providers/registry.ts
    • Included a new case for SupportedProvider.MiniMax to create an OpenAI-compatible SDK instance.
  • apps/desktop/src/main/ai/providers/types.ts
    • Appended MiniMax to the SupportedProvider enum.
  • apps/desktop/src/shared/constants/models.ts
    • Registered MiniMax M2.5 models, defined provider presets, and established cross-provider model equivalences.
  • apps/desktop/src/shared/constants/providers.ts
    • Added MiniMax to the PROVIDER_REGISTRY with its details, including API base URL and authentication methods.
  • apps/desktop/src/shared/types/provider-account.ts
    • Extended the BuiltinProvider type to include minimax.
Activity
  • Confirmed that the implementation follows the existing Z.AI/Ollama createOpenAICompatible pattern.
  • Verified that exhaustive switch statements in factory.ts and registry.ts are satisfied with TypeScript's never guard.
  • Added a dedicated test for MiniMax prefix detection.
  • Ensured that all existing it.each(allProviders) tests will automatically cover the new MiniMax provider.
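The "never guard" mentioned in the activity notes is TypeScript's standard exhaustiveness check: add a union member without a matching switch case and the assignment to `never` stops compiling. A minimal illustration follows; the URLs other than MiniMax's are placeholders, not the project's real values.

```typescript
type Provider = 'anthropic' | 'openai' | 'minimax';

function defaultBaseUrl(p: Provider): string {
  switch (p) {
    case 'anthropic':
      return 'https://anthropic.example/v1'; // placeholder
    case 'openai':
      return 'https://openai.example/v1'; // placeholder
    case 'minimax':
      return 'https://api.minimax.io/v1';
    default: {
      // Deleting any case above makes this assignment a compile error,
      // because `p` would no longer narrow to `never`.
      const unreachable: never = p;
      throw new Error(`unhandled provider: ${unreachable}`);
    }
  }
}

console.log(defaultBaseUrl('minimax')); // prints: https://api.minimax.io/v1
```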

@coderabbitai
Contributor

coderabbitai bot commented Mar 12, 2026

📝 Walkthrough

Walkthrough

Adds MiniMax provider support: new provider enum/type entries, auth/settings keys, provider registry metadata, factory/registry creation for an OpenAI-compatible MiniMax client, many MiniMax models/presets/equivalences, a model-prefix mapping, resolver mapping, and unit tests for prefix detection.

Changes

Cohort / File(s) Summary
Config & Prefix Mapping
apps/desktop/src/main/ai/config/types.ts
Added "minimax-": "minimax" to model provider prefix mapping.
Provider Types & Accounts
apps/desktop/src/main/ai/providers/types.ts, apps/desktop/src/shared/types/provider-account.ts
Added MiniMax / 'minimax' to SupportedProvider and included 'minimax' in BuiltinProvider union.
Factory & Registry
apps/desktop/src/main/ai/providers/factory.ts, apps/desktop/src/main/ai/providers/registry.ts
Added MiniMax branch that constructs an OpenAI-compatible client via createOpenAICompatible(name: 'minimax', apiKey, baseURL, headers) with default baseURL https://api.minimax.io/v1.
Models, Presets & Equivalences
apps/desktop/src/shared/constants/models.ts
Added four MiniMax models (minimax-m2.7, minimax-m2.7-highspeed, minimax-m2.5, minimax-m2.5-highspeed) to ALL_AVAILABLE_MODELS; added minimax provider presets; extended DEFAULT_MODEL_EQUIVALENCES with new minimax groups and per-provider mappings.
Provider Registry Metadata
apps/desktop/src/shared/constants/providers.ts
Appended minimax entry in PROVIDER_REGISTRY with id, name, description, category, auth/env var info, baseUrl config, and website.
Auth Types & Resolver
apps/desktop/src/main/ai/auth/types.ts, apps/desktop/src/main/ai/auth/resolver.ts
Added PROVIDER_ENV_VARS and PROVIDER_SETTINGS_KEY mappings for minimax (MINIMAX_API_KEY, globalMiniMaxApiKey); mapped builtin minimax to supported minimax in resolver.
App Settings
apps/desktop/src/shared/types/settings.ts
Added optional globalZAIApiKey?: string and globalMiniMaxApiKey?: string to AppSettings.
Tests
apps/desktop/src/main/ai/providers/__tests__/factory.test.ts
Added unit tests asserting model IDs prefixed with minimax- resolve to the minimax provider.
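The auth entries in the table above imply a two-step key resolution: check the provider's environment variable, then fall back to the persisted settings key. The record names (`PROVIDER_ENV_VARS`, `PROVIDER_SETTINGS_KEY`, `MINIMAX_API_KEY`, `globalMiniMaxApiKey`) come from the walkthrough; the resolution order and function shape are assumptions.

```typescript
const PROVIDER_ENV_VARS: Record<string, string> = { minimax: 'MINIMAX_API_KEY' };
const PROVIDER_SETTINGS_KEY: Record<string, string> = { minimax: 'globalMiniMaxApiKey' };

interface AppSettings {
  globalMiniMaxApiKey?: string;
  [key: string]: string | undefined;
}

function resolveApiKey(
  provider: string,
  env: Record<string, string | undefined>, // e.g. process.env
  settings: AppSettings,
): string | undefined {
  // Environment variable takes precedence (assumed ordering).
  const envVar = PROVIDER_ENV_VARS[provider];
  if (envVar && env[envVar]) return env[envVar];
  // Otherwise fall back to the stored settings key.
  const settingsKey = PROVIDER_SETTINGS_KEY[provider];
  return settingsKey ? settings[settingsKey] : undefined;
}

console.log(resolveApiKey('minimax', { MINIMAX_API_KEY: 'env-key' }, {})); // prints: env-key
console.log(resolveApiKey('minimax', {}, { globalMiniMaxApiKey: 'stored-key' })); // prints: stored-key
```

Without the two `minimax` record entries, both lookups return `undefined` and the key never resolves — which is why a later review comment flags their absence as critical.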

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant Registry
  participant Factory
  participant SDK as "OpenAI-compatible SDK"
  participant MinimaxAPI as "minimax API"

  Client->>Registry: request provider for modelId "minimax-m2.5"
  Registry->>Factory: determine provider = "minimax" and create instance
  Factory->>SDK: createOpenAICompatible(name: "minimax", apiKey, baseURL)
  SDK->>MinimaxAPI: send requests to https://api.minimax.io/v1
  MinimaxAPI-->>SDK: responses
  SDK-->>Factory: return provider client instance
  Factory-->>Registry: provider instance
  Registry-->>Client: provider ready

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐇 I hopped in with a twitch and a grin,
MiniMax models now join the din,
From prefix map to API flow,
I bounced through code where new things grow,
A tiny cheer—more models to win!

🚥 Pre-merge checks: ✅ 3 passed

  • Description Check — ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed. The PR title accurately describes the main change: adding MiniMax as an LLM provider with M2.7 and M2.7-highspeed models, which aligns with the core changeset across all modified files.
  • Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, above the required threshold of 80.00%.


@github-actions github-actions bot left a comment

🎉 Thanks for your first PR!

A maintainer will review it soon. Please make sure:

  • Your branch is synced with develop
  • CI checks pass
  • You've followed our contribution guide

Welcome to the Auto Claude community!

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request successfully adds MiniMax as a new LLM provider, integrating it across the application from provider registration to model configuration. The changes are well-structured and follow existing patterns. My feedback focuses on improving consistency and maintainability, specifically regarding naming conventions and code duplication.

'llama-': 'groq',
'grok-': 'xai',
'glm-': 'zai',
'MiniMax-': 'minimax',

medium

For consistency with other provider prefixes in MODEL_PROVIDER_MAP (e.g., 'claude-', 'gpt-', 'llama-'), which are all lowercase, it's recommended to use a lowercase prefix for MiniMax. This improves maintainability and predictability.

This change will require corresponding updates to model names in apps/desktop/src/shared/constants/models.ts and tests in apps/desktop/src/main/ai/providers/__tests__/factory.test.ts.

Suggested change
'MiniMax-': 'minimax',
'minimax-': 'minimax',

Comment on lines +166 to +168
it('detects MiniMax from MiniMax- prefix', () => {
expect(detectProviderFromModel('MiniMax-M2.5')).toBe('minimax');
});

medium

To align with the suggested change of using a lowercase prefix for MiniMax models for consistency, this test should be updated to use a lowercase model name and test description.

Suggested change
it('detects MiniMax from MiniMax- prefix', () => {
expect(detectProviderFromModel('MiniMax-M2.5')).toBe('minimax');
});
it('detects MiniMax from minimax- prefix', () => {
expect(detectProviderFromModel('minimax-m2.5')).toBe('minimax');
});

Comment on lines +144 to +150
case SupportedProvider.MiniMax:
return createOpenAICompatible({
name: 'minimax',
apiKey,
baseURL: baseURL ?? 'https://api.minimax.io/v1',
headers,
});

medium

This logic for creating the MiniMax provider is duplicated in apps/desktop/src/main/ai/providers/registry.ts. To improve maintainability and avoid future inconsistencies, consider extracting this logic into a shared helper function. This pattern of duplication also exists for other createOpenAICompatible providers like ZAI.
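One possible shape for the shared helper this comment suggests is a single table of OpenAI-compatible providers consumed by both factory.ts and registry.ts. Everything here is illustrative — the helper name, the table, and the Z.AI placeholder URL are not the project's actual API; only MiniMax's base URL comes from the diff.

```typescript
// Stand-in for the SDK call from @ai-sdk/openai-compatible.
function createOpenAICompatible(opts: { name: string; apiKey: string; baseURL: string }) {
  return { name: opts.name, baseURL: opts.baseURL };
}

// Single source of truth for default endpoints of OpenAI-compatible providers.
const OPENAI_COMPATIBLE_DEFAULTS: Record<string, string> = {
  minimax: 'https://api.minimax.io/v1',
  zai: 'https://zai.example/v1', // placeholder, not the real Z.AI endpoint
};

function createCompatibleClient(id: string, apiKey: string, baseURL?: string) {
  const defaultURL = OPENAI_COMPATIBLE_DEFAULTS[id];
  if (!defaultURL) throw new Error(`not an OpenAI-compatible provider: ${id}`);
  return createOpenAICompatible({ name: id, apiKey, baseURL: baseURL ?? defaultURL });
}

// Both factory.ts and registry.ts would reduce their duplicated branches to:
const client = createCompatibleClient('minimax', 'test-key');
console.log(client.baseURL); // prints: https://api.minimax.io/v1
```

Adding a new OpenAI-compatible provider would then mean touching the table once instead of two switch statements.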

Comment on lines +70 to +71
{ value: 'MiniMax-M2.5', label: 'MiniMax M2.5', provider: 'minimax', description: 'Peak performance', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
{ value: 'MiniMax-M2.5-highspeed', label: 'MiniMax M2.5 High Speed', provider: 'minimax', description: 'Fast and agile', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },

medium

To maintain consistency with other model identifiers in the project, which are typically lowercase, the value for these new MiniMax models should be in lowercase. This change should be propagated to all usages of these model IDs within this file.

Suggested change
{ value: 'MiniMax-M2.5', label: 'MiniMax M2.5', provider: 'minimax', description: 'Peak performance', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
{ value: 'MiniMax-M2.5-highspeed', label: 'MiniMax M2.5 High Speed', provider: 'minimax', description: 'Fast and agile', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
{ value: 'minimax-m2.5', label: 'MiniMax M2.5', provider: 'minimax', description: 'Peak performance', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },
{ value: 'minimax-m2.5-highspeed', label: 'MiniMax M2.5 High Speed', provider: 'minimax', description: 'Fast and agile', capabilities: { thinking: false, tools: true, vision: false, contextWindow: 204800 } },

Comment on lines +323 to +324
auto: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
balanced: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },

medium

The auto and balanced presets are identical. While this pattern exists for the mistral provider, it introduces code duplication. To improve maintainability, you could define this preset configuration in a constant and reference it for both auto and balanced.
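The hoisting this comment proposes could look like the sketch below. The object shape is copied from the diff above; the model IDs use the lowercase form, which is one of the other suggestions in this review, so treat the exact IDs as assumptions.

```typescript
// Shared preset stated once instead of duplicated across auto/balanced.
const MINIMAX_STANDARD = {
  primaryModel: 'minimax-m2.5',
  primaryThinking: 'low',
  phaseModels: { spec: 'minimax-m2.5', planning: 'minimax-m2.5', coding: 'minimax-m2.5', qa: 'minimax-m2.5' },
  phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' },
} as const;

const minimaxPresets = {
  auto: MINIMAX_STANDARD,
  balanced: MINIMAX_STANDARD, // identical by design, now stated once
  quick: {
    ...MINIMAX_STANDARD,
    primaryModel: 'minimax-m2.5-highspeed',
    phaseModels: { spec: 'minimax-m2.5-highspeed', planning: 'minimax-m2.5-highspeed', coding: 'minimax-m2.5-highspeed', qa: 'minimax-m2.5-highspeed' },
  },
};

console.log(minimaxPresets.auto === minimaxPresets.balanced); // prints: true
```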

Comment on lines +323 to +325
auto: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
balanced: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
quick: { primaryModel: 'MiniMax-M2.5-highspeed', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5-highspeed', planning: 'MiniMax-M2.5-highspeed', coding: 'MiniMax-M2.5-highspeed', qa: 'MiniMax-M2.5-highspeed' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },

medium

To align with the suggested change to lowercase model identifiers, the primaryModel and phaseModels in these presets should be updated.

Suggested change
auto: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
balanced: { primaryModel: 'MiniMax-M2.5', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5', planning: 'MiniMax-M2.5', coding: 'MiniMax-M2.5', qa: 'MiniMax-M2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
quick: { primaryModel: 'MiniMax-M2.5-highspeed', primaryThinking: 'low', phaseModels: { spec: 'MiniMax-M2.5-highspeed', planning: 'MiniMax-M2.5-highspeed', coding: 'MiniMax-M2.5-highspeed', qa: 'MiniMax-M2.5-highspeed' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
auto: { primaryModel: 'minimax-m2.5', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5', planning: 'minimax-m2.5', coding: 'minimax-m2.5', qa: 'minimax-m2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
balanced: { primaryModel: 'minimax-m2.5', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5', planning: 'minimax-m2.5', coding: 'minimax-m2.5', qa: 'minimax-m2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
quick: { primaryModel: 'minimax-m2.5-highspeed', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5-highspeed', planning: 'minimax-m2.5-highspeed', coding: 'minimax-m2.5-highspeed', qa: 'minimax-m2.5-highspeed' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },

Comment on lines 425 to 436
minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
},
'MiniMax-M2.5': {
minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
anthropic: { modelId: 'claude-sonnet-4-6', reasoning: { type: 'thinking_tokens', level: 'medium' } },
openai: { modelId: 'gpt-5.2', reasoning: { type: 'reasoning_effort', level: 'medium' } },
},
'MiniMax-M2.5-highspeed': {
minimax: { modelId: 'MiniMax-M2.5-highspeed', reasoning: { type: 'none' } },
anthropic: { modelId: 'claude-haiku-4-5-20251001', reasoning: { type: 'none' } },
openai: { modelId: 'gpt-5.1-codex-mini', reasoning: { type: 'reasoning_effort', level: 'low' } },
},

medium

To align with the suggested change to lowercase model identifiers, the keys and modelId values in this equivalence mapping should be updated to use lowercase.

Suggested change
minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
},
'MiniMax-M2.5': {
minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
anthropic: { modelId: 'claude-sonnet-4-6', reasoning: { type: 'thinking_tokens', level: 'medium' } },
openai: { modelId: 'gpt-5.2', reasoning: { type: 'reasoning_effort', level: 'medium' } },
},
'MiniMax-M2.5-highspeed': {
minimax: { modelId: 'MiniMax-M2.5-highspeed', reasoning: { type: 'none' } },
anthropic: { modelId: 'claude-haiku-4-5-20251001', reasoning: { type: 'none' } },
openai: { modelId: 'gpt-5.1-codex-mini', reasoning: { type: 'reasoning_effort', level: 'low' } },
},
minimax: { modelId: 'minimax-m2.5', reasoning: { type: 'none' } },
},
'minimax-m2.5': {
minimax: { modelId: 'minimax-m2.5', reasoning: { type: 'none' } },
anthropic: { modelId: 'claude-sonnet-4-6', reasoning: { type: 'thinking_tokens', level: 'medium' } },
openai: { modelId: 'gpt-5.2', reasoning: { type: 'reasoning_effort', level: 'medium' } },
},
'minimax-m2.5-highspeed': {
minimax: { modelId: 'minimax-m2.5-highspeed', reasoning: { type: 'none' } },
anthropic: { modelId: 'claude-haiku-4-5-20251001', reasoning: { type: 'none' } },
openai: { modelId: 'gpt-5.1-codex-mini', reasoning: { type: 'reasoning_effort', level: 'low' } },
},

groq: { modelId: 'llama-3.3-70b-versatile', reasoning: { type: 'none' } },
xai: { modelId: 'grok-3-mini', reasoning: { type: 'reasoning_effort', level: 'medium' } },
zai: { modelId: 'glm-4.7', reasoning: { type: 'none' } },
minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },

medium

To align with the suggested change to lowercase model identifiers, this model ID should be updated to use lowercase.

Suggested change
minimax: { modelId: 'MiniMax-M2.5', reasoning: { type: 'none' } },
minimax: { modelId: 'minimax-m2.5', reasoning: { type: 'none' } },

mistral: { modelId: 'mistral-small-latest', reasoning: { type: 'none' } },
groq: { modelId: 'llama-3.3-70b-versatile', reasoning: { type: 'none' } },
zai: { modelId: 'glm-4.5-flash', reasoning: { type: 'none' } },
minimax: { modelId: 'MiniMax-M2.5-highspeed', reasoning: { type: 'none' } },

medium

To align with the suggested change to lowercase model identifiers, this model ID should be updated to use lowercase.

Suggested change
minimax: { modelId: 'MiniMax-M2.5-highspeed', reasoning: { type: 'none' } },
minimax: { modelId: 'minimax-m2.5-highspeed', reasoning: { type: 'none' } },

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/desktop/src/main/ai/providers/__tests__/factory.test.ts`:
- Around line 166-168: The test only verifies
detectProviderFromModel('MiniMax-M2.5') yields 'minimax' but doesn't validate
the provider factory output; update the test to create the provider instance
using the same factory used elsewhere in this test file (i.e., call the module's
provider factory with the provider id returned by
detectProviderFromModel('MiniMax-M2.5')) and assert the resulting factory
produces the expected MiniMax-specific properties (e.g., provider.name ===
'minimax' and the default baseURL/value expected for MiniMax); ensure you import
and use the exact factory function from the file under test and add these
assertions so the branch behavior (name and baseURL) is validated despite the
openai-compatible mock.

In `@apps/desktop/src/main/ai/providers/types.ts`:
- Around line 23-24: SupportedProvider was extended with 'MiniMax' but the auth
lookup maps were not updated; add a minimax entry to the PROVIDER_ENV_VARS and
PROVIDER_SETTINGS_KEY records so TypeScript strict checks pass and stored
MiniMax keys resolve. Locate the constants PROVIDER_ENV_VARS and
PROVIDER_SETTINGS_KEY and add the corresponding minimax key/value entries
(matching the pattern used for other providers) and ensure the string keys align
with the new SupportedProvider 'MiniMax' value.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: ba0ee00d-9e6f-4124-bd14-9f809c161952

📥 Commits

Reviewing files that changed from the base of the PR and between 04e68e3 and e68b7fb.

📒 Files selected for processing (8)
  • apps/desktop/src/main/ai/config/types.ts
  • apps/desktop/src/main/ai/providers/__tests__/factory.test.ts
  • apps/desktop/src/main/ai/providers/factory.ts
  • apps/desktop/src/main/ai/providers/registry.ts
  • apps/desktop/src/main/ai/providers/types.ts
  • apps/desktop/src/shared/constants/models.ts
  • apps/desktop/src/shared/constants/providers.ts
  • apps/desktop/src/shared/types/provider-account.ts

Comment on lines +166 to +168
it('detects MiniMax from MiniMax- prefix', () => {
expect(detectProviderFromModel('MiniMax-M2.5')).toBe('minimax');
});

🧹 Nitpick | 🔵 Trivial

Add a MiniMax factory assertion here, not just prefix detection.

This only proves MODEL_PROVIDER_MAP contains MiniMax-. Because the @ai-sdk/openai-compatible mock returns the same stub for Z.AI, MiniMax, and Ollama, the suite would still pass if the MiniMax branch used the wrong name or default baseURL.

🧪 Suggested test hardening
+import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
 import { createAnthropic } from '@ai-sdk/anthropic';
 import { createProvider, detectProviderFromModel, createProviderFromModelId } from '../factory';
 import { SupportedProvider } from '../types';
   it('detects MiniMax from MiniMax- prefix', () => {
     expect(detectProviderFromModel('MiniMax-M2.5')).toBe('minimax');
   });
+
+  it('wires MiniMax through the OpenAI-compatible adapter', () => {
+    createProvider({
+      config: { provider: SupportedProvider.MiniMax, apiKey: 'test-key' },
+      modelId: 'MiniMax-M2.5',
+    });
+
+    expect(createOpenAICompatible).toHaveBeenCalledWith(
+      expect.objectContaining({
+        name: 'minimax',
+        apiKey: 'test-key',
+        baseURL: 'https://api.minimax.io/v1',
+      }),
+    );
+  });

As per coding guidelines, apps/desktop/**/*.test.{ts,tsx}: Ensure tests are comprehensive and follow Vitest conventions. Check for proper mocking and test isolation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/desktop/src/main/ai/providers/__tests__/factory.test.ts` around lines
166 - 168, The test only verifies detectProviderFromModel('MiniMax-M2.5') yields
'minimax' but doesn't validate the provider factory output; update the test to
create the provider instance using the same factory used elsewhere in this test
file (i.e., call the module's provider factory with the provider id returned by
detectProviderFromModel('MiniMax-M2.5')) and assert the resulting factory
produces the expected MiniMax-specific properties (e.g., provider.name ===
'minimax' and the default baseURL/value expected for MiniMax); ensure you import
and use the exact factory function from the file under test and add these
assertions so the branch behavior (name and baseURL) is validated despite the
openai-compatible mock.

Address PR review feedback:
- Change all MiniMax model IDs from PascalCase to lowercase
  ('MiniMax-M2.5' → 'minimax-m2.5', 'MiniMax-M2.5-highspeed' →
  'minimax-m2.5-highspeed') for consistency with other providers
- Update MODEL_PROVIDER_MAP prefix from 'MiniMax-' to 'minimax-'
- Add missing 'minimax' entries to PROVIDER_ENV_VARS and
  PROVIDER_SETTINGS_KEY in auth/types.ts (critical for API key
  resolution)
- Update all preset definitions and equivalence mappings
- Update factory detection test to use lowercase model name

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@CLAassistant

CLAassistant commented Mar 12, 2026

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
0 out of 2 committers have signed the CLA.

❌ PR Bot
❌ octo-patch


PR Bot seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you have already a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@octo-patch
Author

Thanks for the thorough review! I've pushed a fix commit (c4057c0) addressing the feedback:

Changes made

Lowercase model IDs (gemini-code-assist items 1, 2, 4, 6):

  • All MiniMax model values changed from PascalCase to lowercase: MiniMax-M2.5 → minimax-m2.5, MiniMax-M2.5-highspeed → minimax-m2.5-highspeed
  • MODEL_PROVIDER_MAP prefix updated from 'MiniMax-' to 'minimax-' for consistency with other providers (claude-, gpt-, llama-, etc.)
  • All preset definitions and equivalence mappings in models.ts updated to use lowercase IDs
  • Factory detection test updated to use minimax-m2.5

Auth entries (coderabbitai critical item 8):

  • Added minimax: 'MINIMAX_API_KEY' to PROVIDER_ENV_VARS in auth/types.ts
  • Added minimax: 'globalMiniMaxApiKey' to PROVIDER_SETTINGS_KEY in auth/types.ts
  • This ensures MiniMax API keys resolve correctly through the auth chain

Acknowledged but not changed

Factory duplication (gemini-code-assist item 3): The MiniMax provider creation in factory.ts follows the same pattern as ZAI (both use createOpenAICompatible). Keeping it consistent with existing patterns for now.

Identical auto/balanced presets (gemini-code-assist item 5): The auto and balanced presets being identical matches the existing Mistral provider pattern — both providers only have two model tiers, so the presets naturally converge.

Factory test assertion (coderabbitai nitpick item 7): The current prefix-detection test pattern is consistent with all other provider tests in the same describe block. A more thorough factory integration test could be a good follow-up but is out of scope for this PR.

Comment on lines 7 to 13
 export type BuiltinProvider =
   | 'anthropic' | 'openai' | 'google' | 'amazon-bedrock' | 'azure'
   | 'mistral' | 'groq' | 'xai' | 'openrouter' | 'zai'
-  | 'ollama' | 'openai-compatible';
+  | 'minimax' | 'ollama' | 'openai-compatible';

export type BillingModel = 'subscription' | 'pay-per-use';


Bug: The BUILTIN_TO_SUPPORTED map in resolver.ts is missing an entry for 'minimax', causing MiniMax accounts to be silently skipped during queue-based provider routing.
Severity: CRITICAL

Suggested Fix

Add the missing 'minimax': 'minimax' entry to the BUILTIN_TO_SUPPORTED map located in apps/desktop/src/main/ai/auth/resolver.ts to ensure the provider is correctly resolved during queue-based routing.
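The silent-skip failure mode can be sketched as below. This is an assumed, minimal model of the resolver's queue loop, not the actual resolver.ts code:

```typescript
// Hypothetical sketch of queue-based routing with the missing-entry bug.
type BuiltinProvider = 'anthropic' | 'openai' | 'minimax';
type SupportedProvider = 'anthropic' | 'openai' | 'minimax';

const BUILTIN_TO_SUPPORTED: Partial<Record<BuiltinProvider, SupportedProvider>> = {
  anthropic: 'anthropic',
  openai: 'openai',
  minimax: 'minimax', // the entry this fix adds; omitting it makes the lookup undefined
};

// Mirrors the guard described in the finding: accounts whose provider
// fails to resolve are silently skipped.
function resolveFromQueue(accounts: { provider: BuiltinProvider }[]): SupportedProvider[] {
  const resolved: SupportedProvider[] = [];
  for (const account of accounts) {
    const supported = BUILTIN_TO_SUPPORTED[account.provider];
    if (!supported) continue; // silent skip when the map entry is missing
    resolved.push(supported);
  }
  return resolved;
}
```

With the entry present, a MiniMax account in the queue resolves instead of being dropped.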

Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: apps/desktop/src/shared/types/provider-account.ts#L7-L13

Potential issue: The new 'minimax' provider is added to the `BuiltinProvider` type, but
the `BUILTIN_TO_SUPPORTED` map in `resolver.ts` was not updated. The
`resolveAuthFromQueue()` function uses this map to route requests. When it encounters a
'minimax' account, the lookup `BUILTIN_TO_SUPPORTED['minimax']` returns `undefined`,
causing the function to silently `continue` and skip the account. This effectively
disables the MiniMax provider for any multi-provider queue-based routing, which is the
primary routing mechanism in the application, rendering the integration non-functional
for its main use case.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/desktop/src/main/ai/auth/types.ts`:
- Line 100: The minimax -> 'globalMiniMaxApiKey' setting referenced in the
mapping is missing from the settings schema, causing resolveFromProfileApiKey()
(which reads PROVIDER_SETTINGS_KEY) to never find MiniMax keys; add the
corresponding field (globalMiniMaxApiKey) to the AppSettings definition in
apps/desktop/src/shared/types/settings.ts and/or type the settings map as keyof
AppSettings so the compiler prevents this drift, ensuring
resolveFromProfileApiKey() can read MiniMax keys via PROVIDER_SETTINGS_KEY.

In `@apps/desktop/src/shared/constants/models.ts`:
- Around line 425-430: The canonical equivalence for the full MiniMax ID
'minimax-m2.5' is ambiguous: update the 'minimax-m2.5' entry in the models map
so it exhaustively includes equivalent shorthand mappings for all providers
(e.g., add mappings for 'opus' and 'sonnet' like the other full-ID rows) OR
change the reverse lookup behavior in resolveModelEquivalent() to stop
reverse-fallback when a direct full-ID entry exists; specifically modify the
'minimax-m2.5' object in models.ts to include the same provider keys
(anthropic/opus/sonnet/openai/etc.) with their intended modelIds and reasoning,
or alter resolveModelEquivalent() to prefer direct full-ID matches and avoid
selecting the first shorthand by object order.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: b9cb27e4-374a-443f-97a8-1a1e61e2a962

📥 Commits

Reviewing files that changed from the base of the PR and between e68b7fb and c4057c0.

📒 Files selected for processing (4)
  • apps/desktop/src/main/ai/auth/types.ts
  • apps/desktop/src/main/ai/config/types.ts
  • apps/desktop/src/main/ai/providers/__tests__/factory.test.ts
  • apps/desktop/src/shared/constants/models.ts

@AndyMik90 AndyMik90 self-assigned this Mar 12, 2026
- Add missing 'minimax' entry to BUILTIN_TO_SUPPORTED map in resolver.ts,
  fixing silent skip of MiniMax accounts during queue-based provider routing
- Add globalMiniMaxApiKey (and globalZAIApiKey) to AppSettings interface,
  fixing auth settings lookup for profile-specific API keys
- Differentiate model equivalence tiers: 'sonnet' tier now maps to
  minimax-m2.5-highspeed instead of minimax-m2.5 (which is reserved for
  'opus' tier), matching the pattern used by other multi-model providers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@octo-patch
Author

Pushed commit 07df6710 to address the code review findings. Here is a summary of what was fixed:

CRITICAL fixes

  1. Missing BUILTIN_TO_SUPPORTED map entry (resolver.ts)
    Added minimax: 'minimax' to the BUILTIN_TO_SUPPORTED map. Without this, MiniMax provider accounts were silently skipped during queue-based routing in resolveAuthFromQueue() — the if (!supportedProvider) continue guard at line 379 would skip any MiniMax account since the lookup returned undefined.

  2. Missing globalMiniMaxApiKey in AppSettings (settings.ts)
    Added globalMiniMaxApiKey (and globalZAIApiKey which had the same issue) to the AppSettings interface. The auth resolver's PROVIDER_SETTINGS_KEY map references these field names, but TypeScript would not enforce their existence at the settings layer without the interface field — meaning _getSettingsValue('globalMiniMaxApiKey') would always return undefined for the profile API key fallback stage.

MAJOR fix

  1. Model equivalence tier differentiation (models.ts)
    Changed the sonnet tier MiniMax mapping from minimax-m2.5 to minimax-m2.5-highspeed. Previously both opus and sonnet tiers mapped to the same model (minimax-m2.5), defeating the purpose of tiered model selection. Now the mapping is:
    • opus → minimax-m2.5 (peak performance)
    • sonnet → minimax-m2.5-highspeed (balanced)
    • haiku → minimax-m2.5-highspeed (fast, same as sonnet since only two models available)
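The resulting tier mapping can be sketched as a small table. The record shape is an assumption for illustration; the real models.ts equivalence entries are richer:

```typescript
// Assumed sketch of the MiniMax tier equivalence after the fix:
// opus gets the flagship model, sonnet/haiku the highspeed variant.
const MINIMAX_TIER_EQUIVALENTS: Record<'opus' | 'sonnet' | 'haiku', string> = {
  opus: 'minimax-m2.5',             // peak performance
  sonnet: 'minimax-m2.5-highspeed', // balanced
  haiku: 'minimax-m2.5-highspeed',  // fast; only two models available
};
```

The point of the fix is visible in the data: opus and sonnet now resolve to different models, while sonnet and haiku intentionally share one.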

MEDIUM items

  1. Lowercase model identifiers — Already addressed in the previous commit (c4057c05). All model values are lowercase.

  2. Duplicate code between factory.ts and registry.ts — This is a pre-existing architectural pattern affecting all 12 providers, not specific to MiniMax. Refactoring it here would risk regressions across the entire provider infrastructure. Recommend addressing in a dedicated refactoring PR.

Comment on lines +322 to +326
minimax: {
auto: { primaryModel: 'minimax-m2.5', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5', planning: 'minimax-m2.5', coding: 'minimax-m2.5', qa: 'minimax-m2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
balanced: { primaryModel: 'minimax-m2.5', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5', planning: 'minimax-m2.5', coding: 'minimax-m2.5', qa: 'minimax-m2.5' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
quick: { primaryModel: 'minimax-m2.5-highspeed', primaryThinking: 'low', phaseModels: { spec: 'minimax-m2.5-highspeed', planning: 'minimax-m2.5-highspeed', coding: 'minimax-m2.5-highspeed', qa: 'minimax-m2.5-highspeed' }, phaseThinking: { spec: 'low', planning: 'low', coding: 'low', qa: 'low' } },
},

Bug: The MiniMax provider is missing the complex preset, causing the 'Complex Tasks' profile to silently fall back to the 'Auto' profile's settings.
Severity: MEDIUM

Suggested Fix

Add a complex preset to the minimax provider object within PROVIDER_PRESET_DEFINITIONS. This new preset should likely use the same model as the auto preset, such as minimax-m2.5, to align with the pattern used by other providers with limited model variety.
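The fallback behavior described in this finding can be sketched as below. The preset shape and function name are simplified assumptions, not the actual getProviderPresetOrFallback implementation:

```typescript
// Hypothetical sketch of the silent preset fallback described above.
type PresetId = 'auto' | 'balanced' | 'quick' | 'complex';

const minimaxPresets: Partial<Record<PresetId, { primaryModel: string }>> = {
  auto: { primaryModel: 'minimax-m2.5' },
  balanced: { primaryModel: 'minimax-m2.5' },
  quick: { primaryModel: 'minimax-m2.5-highspeed' },
  // 'complex' is absent, so the lookup below silently falls back to 'auto'
};

function getPresetOrFallback(id: PresetId): { primaryModel: string } {
  return minimaxPresets[id] ?? minimaxPresets.auto!;
}
```

Selecting the 'Complex Tasks' profile therefore behaves exactly like 'Auto' until a complex preset is added.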

Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: apps/desktop/src/shared/constants/models.ts#L322-L326

Potential issue: The MiniMax provider definition in `PROVIDER_PRESET_DEFINITIONS` is
missing a `complex` preset. When a user selects the 'Complex Tasks' profile, which has
an ID of 'complex', the `getProviderPresetOrFallback` function fails to find an exact
match for `minimax.complex`. The function then silently falls back to the `minimax.auto`
preset. This results in the 'Complex Tasks' profile behaving identically to the 'Auto'
profile for the MiniMax provider, ignoring the user's selection without any warning or
indication.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/desktop/src/shared/types/settings.ts`:
- Around line 287-288: DEFAULT_APP_SETTINGS is missing initializers for the
newly added optional fields globalZAIApiKey and globalMiniMaxApiKey; update the
DEFAULT_APP_SETTINGS object (same area that currently sets globalOpenAIApiKey)
to include explicit initializers for these two keys (set them to undefined with
the same typed form used elsewhere, e.g., as string | undefined) so they are
consistently present during resets and first-run initialization.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 4fc29a66-d743-4299-8dcb-070280d8a267

📥 Commits

Reviewing files that changed from the base of the PR and between c4057c0 and 07df671.

📒 Files selected for processing (3)
  • apps/desktop/src/main/ai/auth/resolver.ts
  • apps/desktop/src/shared/constants/models.ts
  • apps/desktop/src/shared/types/settings.ts

Comment on lines +287 to +288
globalZAIApiKey?: string;
globalMiniMaxApiKey?: string;
Contributor


🧹 Nitpick | 🔵 Trivial

Add new API key fields to DEFAULT_APP_SETTINGS for consistency.

The new globalZAIApiKey and globalMiniMaxApiKey fields are correctly added as optional. However, DEFAULT_APP_SETTINGS in apps/desktop/src/shared/constants/config.ts (around line 51) initializes globalOpenAIApiKey but not these new keys. For consistency with other global API key fields, add explicit undefined initializers:

// In apps/desktop/src/shared/constants/config.ts DEFAULT_APP_SETTINGS:
globalZAIApiKey: undefined as string | undefined,
globalMiniMaxApiKey: undefined as string | undefined,

This ensures consistent initialization across settings resets and first-launch scenarios.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/desktop/src/shared/types/settings.ts` around lines 287 - 288,
DEFAULT_APP_SETTINGS is missing initializers for the newly added optional fields
globalZAIApiKey and globalMiniMaxApiKey; update the DEFAULT_APP_SETTINGS object
(same area that currently sets globalOpenAIApiKey) to include explicit
initializers for these two keys (set them to undefined with the same typed form
used elsewhere, e.g., as string | undefined) so they are consistently present
during resets and first-run initialization.

- Add MiniMax-M2.7-highspeed to model catalog for low-latency scenarios
- Upgrade quick preset from M2.5-highspeed to M2.7-highspeed
- Add M2.7-highspeed model equivalence entry
- Update haiku tier to map to M2.7-highspeed
- Update provider detection test for M2.7-highspeed
@octo-patch octo-patch changed the title feat: add MiniMax as a new LLM provider feat: add MiniMax as LLM provider with M2.7 and M2.7-highspeed models Mar 18, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
apps/desktop/src/main/ai/providers/__tests__/factory.test.ts (1)

166-170: 🧹 Nitpick | 🔵 Trivial

Harden MiniMax coverage with a factory-path assertion.

Line 166-Line 170 verifies prefix detection only. With the shared @ai-sdk/openai-compatible mock, the suite can still pass even if MiniMax is wired with an incorrect name or default baseURL.

🧪 Suggested test hardening
+import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
 import { createAnthropic } from '@ai-sdk/anthropic';
 import { createProvider, detectProviderFromModel, createProviderFromModelId } from '../factory';
 import { SupportedProvider } from '../types';
   it('detects MiniMax from minimax- prefix', () => {
     expect(detectProviderFromModel('minimax-m2.7')).toBe('minimax');
     expect(detectProviderFromModel('minimax-m2.7-highspeed')).toBe('minimax');
     expect(detectProviderFromModel('minimax-m2.5')).toBe('minimax');
   });
+
+  it('wires MiniMax through the OpenAI-compatible adapter', () => {
+    createProvider({
+      config: { provider: SupportedProvider.MiniMax, apiKey: 'test-key' },
+      modelId: 'minimax-m2.7',
+    });
+
+    expect(createOpenAICompatible).toHaveBeenCalledWith(
+      expect.objectContaining({
+        name: 'minimax',
+        apiKey: 'test-key',
+        baseURL: 'https://api.minimax.io/v1',
+      }),
+    );
+  });

As per coding guidelines, apps/desktop/**/*.test.{ts,tsx}: Ensure tests are comprehensive and follow Vitest conventions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/desktop/src/main/ai/providers/__tests__/factory.test.ts` around lines
166 - 170, The test only checks prefix detection via
detectProviderFromModel('minimax-...') but must also assert the actual provider
factory wiring; update the test to call the provider factory resolver (e.g.,
createProviderFromModel or getProviderByModel) for 'minimax-m2.7' and assert the
returned provider metadata contains the expected factory/name/baseURL values for
MiniMax (for example provider.name === 'minimax' and provider.baseURL matches
the MiniMax factory URL or provider.factoryPath matches the MiniMax factory
module) so the test fails if MiniMax is wired with the wrong implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/desktop/src/shared/constants/providers.ts`:
- Around line 65-69: Update the provider entry with id 'minimax' by editing its
description field to include the missing "minimax-m2.5-highspeed" model name so
the text lists "MiniMax M2.7, M2.7-highspeed, minimax-m2.5-highspeed and M2.5"
(or otherwise include that model) to match the model catalog; locate the object
with id: 'minimax' and adjust the description string accordingly.

---

Duplicate comments:
In `@apps/desktop/src/main/ai/providers/__tests__/factory.test.ts`:
- Around line 166-170: The test only checks prefix detection via
detectProviderFromModel('minimax-...') but must also assert the actual provider
factory wiring; update the test to call the provider factory resolver (e.g.,
createProviderFromModel or getProviderByModel) for 'minimax-m2.7' and assert the
returned provider metadata contains the expected factory/name/baseURL values for
MiniMax (for example provider.name === 'minimax' and provider.baseURL matches
the MiniMax factory URL or provider.factoryPath matches the MiniMax factory
module) so the test fails if MiniMax is wired with the wrong implementation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 67b698a7-9d5c-4a88-903c-20851ea39a0e

📥 Commits

Reviewing files that changed from the base of the PR and between 07df671 and 4971b1d.

📒 Files selected for processing (3)
  • apps/desktop/src/main/ai/providers/__tests__/factory.test.ts
  • apps/desktop/src/shared/constants/models.ts
  • apps/desktop/src/shared/constants/providers.ts

Comment on lines +65 to +69
id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed and M2.5 models',
category: 'popular',
authMethods: ['api-key'], envVars: ['MINIMAX_API_KEY'],
configFields: ['baseUrl'], website: 'https://platform.minimax.io',
},
Contributor


⚠️ Potential issue | 🟡 Minor

Provider description omits minimax-m2.5-highspeed.

Line 65 is missing one currently supported MiniMax model, which makes the UI description inconsistent with the model catalog.

✏️ Suggested text fix
-    id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed and M2.5 models',
+    id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed, M2.5, and M2.5-highspeed models',
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed and M2.5 models',
category: 'popular',
authMethods: ['api-key'], envVars: ['MINIMAX_API_KEY'],
configFields: ['baseUrl'], website: 'https://platform.minimax.io',
},
id: 'minimax', name: 'MiniMax', description: 'MiniMax M2.7, M2.7-highspeed, M2.5, and M2.5-highspeed models',
category: 'popular',
authMethods: ['api-key'], envVars: ['MINIMAX_API_KEY'],
configFields: ['baseUrl'], website: 'https://platform.minimax.io',
},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/desktop/src/shared/constants/providers.ts` around lines 65 - 69, Update
the provider entry with id 'minimax' by editing its description field to include
the missing "minimax-m2.5-highspeed" model name so the text lists "MiniMax M2.7,
M2.7-highspeed, minimax-m2.5-highspeed and M2.5" (or otherwise include that
model) to match the model catalog; locate the object with id: 'minimax' and
adjust the description string accordingly.

3 participants