
feat: add MiniMax provider support with M2.7 model#144

Open
octo-patch wants to merge 3 commits into memovai:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

@octo-patch octo-patch commented Mar 12, 2026

Summary

Add MiniMax as a new LLM provider option alongside Anthropic and OpenAI. MiniMax uses the OpenAI-compatible API format, so the existing OpenAI code path is cleanly reused.

Supported Models

  • MiniMax-M2.7 — Latest flagship model with enhanced reasoning and coding capabilities
  • MiniMax-M2.7-highspeed — High-speed version of M2.7 for low-latency scenarios
  • MiniMax-M2.5 — Peak Performance, Ultimate Value (previous generation)
  • MiniMax-M2.5-highspeed — Fast version of M2.5

Changes

  • Add MiniMax API URL and provider detection in llm_proxy.c
  • Add minimax as valid option for set_model_provider CLI command
  • Update all READMEs (EN/CN/JA) with MiniMax usage examples using M2.7
  • Add MiniMax API key placeholder in mimi_secrets.h.example
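
For illustration, the `mimi_secrets.h.example` change could look like the sketch below. `MIMI_SECRET_MODEL_PROVIDER` and the three provider values come from the PR; the API-key macro name is a hypothetical placeholder, not the project's actual name.

```c
/* Sketch of mimi_secrets.h.example (illustrative).
 * MIMI_SECRET_MODEL_PROVIDER may be "anthropic", "openai", or "minimax"
 * per the PR; MIMI_SECRET_API_KEY is a made-up placeholder name. */
#define MIMI_SECRET_MODEL_PROVIDER "minimax"
#define MIMI_SECRET_API_KEY "<your MiniMax API key>"
```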

Usage

  mimi> set_model_provider minimax
  mimi> set_api_key <MINIMAX_API_KEY>
  mimi> set_model MiniMax-M2.7

Testing

  • Verified MiniMax provider routes correctly through OpenAI-compatible code path
  • API URL resolves to https://api.minimax.io/v1/chat/completions

Summary by CodeRabbit

  • New Features

    • Added MiniMax as a supported LLM provider; users can configure it and select MiniMax models (e.g., MiniMax-M2.7).
  • Documentation

    • Updated English, Chinese, and Japanese docs and quickstart examples to include MiniMax and updated CLI usage.
  • Chores

    • CLI help and examples expanded to show MiniMax as a valid provider and provider-selection commands.

Add MiniMax (https://platform.minimax.io) as a third provider option
alongside Anthropic and OpenAI. MiniMax uses OpenAI-compatible API
format, so the existing OpenAI code path is reused via a new
provider_uses_openai_format() helper.
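
The helper names above come from the PR; their bodies are not shown, so the following is a minimal sketch of what such string-comparison helpers could look like, assuming the provider is stored as a lowercase string:

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of the provider helpers named in the PR; the function names
 * match the PR description, but the bodies here are assumptions. */
static bool provider_is_minimax(const char *provider)
{
    return provider != NULL && strcmp(provider, "minimax") == 0;
}

static bool provider_is_openai(const char *provider)
{
    return provider != NULL && strcmp(provider, "openai") == 0;
}

/* MiniMax speaks the OpenAI chat-completions dialect, so both
 * providers can share one request/response code path. */
static bool provider_uses_openai_format(const char *provider)
{
    return provider_is_openai(provider) || provider_is_minimax(provider);
}
```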

Changes:
- Add MIMI_MINIMAX_API_URL constant (https://api.minimax.io/v1)
- Add provider_is_minimax() and provider_uses_openai_format() helpers
- Route MiniMax requests to api.minimax.io with Bearer auth
- Update CLI help text to include minimax option
- Update mimi_secrets.h.example with MiniMax documentation
- Update README.md, README_CN.md, README_JA.md with MiniMax info

Supported models: MiniMax-M2.5, MiniMax-M2.5-highspeed

Usage:
  mimi> set_model_provider minimax
  mimi> set_api_key <MINIMAX_API_KEY>
  mimi> set_model MiniMax-M2.5
@coderabbitai

coderabbitai bot commented Mar 12, 2026

📝 Walkthrough

Walkthrough

Adds MiniMax as a third LLM provider alongside Anthropic and OpenAI. Updates documentation, build-time secret examples, CLI help text, and llm proxy logic to treat MiniMax as OpenAI-format (URL/host/path, headers, message formatting, and response parsing).

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Documentation**: `README.md`, `README_CN.md`, `README_JA.md` | Add MiniMax to provider lists, quickstart, CLI examples, and feature/descriptive text across the English, Chinese, and Japanese READMEs. |
| **Configuration**: `main/mimi_config.h` | Add the `MIMI_MINIMAX_API_URL` macro pointing to the MiniMax API endpoint. |
| **Secrets / Examples**: `main/mimi_secrets.h.example` | Update the section label and inline comment for `MIMI_SECRET_MODEL_PROVIDER` to allow "anthropic", "openai", or "minimax". |
| **CLI Help Text**: `main/cli/serial_cli.c` | Help text for `set_model_provider` now lists `(anthropic\|openai\|minimax)`. |
| **LLM Proxy Core Logic**: `main/llm/llm_proxy.c` | Add `provider_is_minimax()` and `provider_uses_openai_format()`; route MiniMax to the OpenAI-compatible URL/host/path, using the Authorization header and OpenAI-style message formatting/parsing when `provider_uses_openai_format()` is true; keep legacy headers/format for non-OpenAI-format providers. |
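
A self-contained sketch of the routing described above (not the PR's literal code): the URLs and header names follow the PR and commit text, while the function names `resolve_url`/`build_auth_headers`, the Anthropic version string, and the buffer handling are illustrative assumptions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Endpoint constants; the MiniMax URL matches the one the PR's
 * testing notes say the provider resolves to. */
#define MIMI_ANTHROPIC_API_URL "https://api.anthropic.com/v1/messages"
#define MIMI_OPENAI_API_URL    "https://api.openai.com/v1/chat/completions"
#define MIMI_MINIMAX_API_URL   "https://api.minimax.io/v1/chat/completions"

static bool uses_openai_format(const char *p)
{
    return strcmp(p, "openai") == 0 || strcmp(p, "minimax") == 0;
}

/* Pick the chat endpoint for the configured provider. */
static const char *resolve_url(const char *provider)
{
    if (strcmp(provider, "minimax") == 0)
        return MIMI_MINIMAX_API_URL;
    if (strcmp(provider, "openai") == 0)
        return MIMI_OPENAI_API_URL;
    return MIMI_ANTHROPIC_API_URL;
}

/* Bearer auth for OpenAI-format providers (OpenAI, MiniMax);
 * the legacy x-api-key/anthropic-version pair for Anthropic. */
static int build_auth_headers(const char *provider, const char *api_key,
                              char *buf, size_t len)
{
    if (uses_openai_format(provider))
        return snprintf(buf, len, "Authorization: Bearer %s\r\n", api_key);
    return snprintf(buf, len,
                    "x-api-key: %s\r\nanthropic-version: 2023-06-01\r\n",
                    api_key);
}
```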

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Proxy as LLM Proxy
    participant Provider as External LLM (Anthropic / OpenAI / MiniMax)

    Client->>Proxy: Send chat request (provider selection + messages)
    Proxy->>Proxy: provider_uses_openai_format()? → yes/no
    alt OpenAI-format (OpenAI or MiniMax)
        Proxy->>Provider: POST to OpenAI-style endpoint<br/>(Authorization header, /v1/chat/completions, OpenAI messages)
        Provider-->>Proxy: OpenAI-style response (choices/messages/tool_calls)
    else Legacy-format (Anthropic)
        Proxy->>Provider: POST to Anthropic-style endpoint<br/>(x-api-key, anthropic-version, /v1/messages, legacy body)
        Provider-->>Proxy: Legacy-style response (stop_reason, content blocks)
    end
    Proxy->>Client: Normalized chat response
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 A nibble, a hop, a bright new fix,
MiniMax joins the LLM mix.
Three hats upon the stack tonight,
Messages formatted just right—
Hooray! I twirl, then bound with glee. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 40.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'feat: add MiniMax provider support with M2.7 model' clearly and concisely summarizes the main change: adding MiniMax as a new LLM provider with M2.7 model support. It is specific, directly related to the primary objective, and follows conventional commit format. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@IRONICBo IRONICBo self-requested a review March 12, 2026 06:41
@IRONICBo IRONICBo self-assigned this Mar 12, 2026
@IRONICBo IRONICBo added the enhancement New feature or request label Mar 12, 2026
PR Bot added 2 commits March 18, 2026 14:58
- Update MiniMax model reference from M2.5 to M2.7 in all README docs
- MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding
@octo-patch octo-patch changed the title feat: add MiniMax provider support feat: add MiniMax provider support with M2.7 model Mar 18, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
README_CN.md (1)

34-34: Suggest polishing this Chinese sentence.

The phrase “所有数据存在本地 Flash” on line 34 reads a bit stiff; consider rewording it to “所有数据都保存在本地 Flash 中” (“all data is stored in local Flash”).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README_CN.md` at line 34, replace the sentence “所有数据存在本地 Flash” with the smoother wording “所有数据都保存在本地 Flash 中”; locate and edit the paragraph containing the original sentence (the line that includes “你在 Telegram 发一条消息,ESP32-S3 通过 WiFi 收到后送进 Agent 循环 — LLM 思考、调用工具、读取记忆 — 再把回复发回来。”) so the original meaning is preserved and the register stays consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README_CN.md`:
- Around line 298-301: README_CN.md has inconsistent wording: the "多提供商" bullet
lists three providers (Anthropic/Claude, OpenAI/GPT, MiniMax) but the "工具调用"
bullet incorrectly says "两种提供商". Update the "工具调用" line to either say
"三种提供商均支持工具调用" to match the three providers, or explicitly state MiniMax's
limitation (e.g., "Anthropic 和 OpenAI 支持工具调用,MiniMax 暂不支持") so the README
consistently reflects MiniMax's support status; adjust the bullets containing
"多提供商" and "工具调用" accordingly.

---

Nitpick comments:
In `@README_CN.md`:
- Line 34: Replace the sentence “所有数据存在本地 Flash” in README_CN.md with the smoother wording “所有数据都保存在本地 Flash 中”; locate and edit the paragraph containing the original sentence (the line that includes “你在 Telegram 发一条消息,ESP32-S3 通过 WiFi 收到后送进 Agent 循环 — LLM 思考、调用工具、读取记忆 — 再把回复发回来。”) so the original meaning is preserved and the register stays consistent.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 910cde09-b3ab-4081-99e0-1c7a11ed023b

📥 Commits

Reviewing files that changed from the base of the PR and between be59181 and 63a16fc.

📒 Files selected for processing (4)
  • README.md
  • README_CN.md
  • README_JA.md
  • main/mimi_config.h
🚧 Files skipped from review as they are similar to previous changes (3)
  • main/mimi_config.h
  • README_JA.md
  • README.md

Comment on lines +298 to 301
- **多提供商** — 同时支持 Anthropic (Claude)、OpenAI (GPT) 和 MiniMax,运行时可切换
- **定时任务** — AI 可自主创建周期性和一次性任务,重启后持久保存
- **心跳服务** — 定期检查任务文件,驱动 AI 自主执行
- **工具调用** — ReAct Agent 循环,两种提供商均支持工具调用

⚠️ Potential issue | 🟡 Minor

The “多提供商” (multi-provider) and “工具调用” (tool calling) bullets give inconsistent provider counts.

Line 298 lists three providers, but Line 301 still says “两种提供商均支持工具调用” (“both providers support tool calling”). Readers could misread MiniMax's tool-calling support status. Suggest unifying the count to three, or explicitly noting MiniMax's limitation, if any.

✏️ Suggested revision
-- **工具调用** — ReAct Agent 循环,两种提供商均支持工具调用
+- **工具调用** — ReAct Agent 循环,三种提供商均支持工具调用
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- **多提供商** — 同时支持 Anthropic (Claude)、OpenAI (GPT) 和 MiniMax,运行时可切换
- **定时任务** — AI 可自主创建周期性和一次性任务,重启后持久保存
- **心跳服务** — 定期检查任务文件,驱动 AI 自主执行
- **工具调用** — ReAct Agent 循环,两种提供商均支持工具调用
- **多提供商** — 同时支持 Anthropic (Claude)、OpenAI (GPT) 和 MiniMax,运行时可切换
- **定时任务** — AI 可自主创建周期性和一次性任务,重启后持久保存
- **心跳服务** — 定期检查任务文件,驱动 AI 自主执行
- **工具调用** — ReAct Agent 循环,三种提供商均支持工具调用
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README_CN.md` around lines 298 - 301, README_CN.md has inconsistent wording:
the "多提供商" bullet lists three providers (Anthropic/Claude, OpenAI/GPT, MiniMax)
but the "工具调用" bullet incorrectly says "两种提供商". Update the "工具调用" line to either
say "三种提供商均支持工具调用" to match the three providers, or explicitly state MiniMax's
limitation (e.g., "Anthropic 和 OpenAI 支持工具调用,MiniMax 暂不支持") so the README
consistently reflects MiniMax's support status; adjust the bullets containing
"多提供商" and "工具调用" accordingly.


Labels

enhancement (New feature or request), provider



2 participants