
feat: add MiniMax as a dedicated LLM provider#222

Open
octo-patch wants to merge 1 commit into agentscope-ai:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch
Contributor

Summary

Add MiniMax as a dedicated LLM provider with first-class support in the installer, AI Gateway setup, and model management scripts.

Supported Models

  • MiniMax-M2.5 - flagship model; peak performance and value for complex tasks
  • MiniMax-M2.5-highspeed - same capabilities at higher speed and lower latency

Changes

Installer (bash + PowerShell)

  • Add MiniMax as provider option 3 with bilingual i18n messages (zh/en)
  • Add model submenu: MiniMax-M2.5 (recommended) / MiniMax-M2.5-highspeed
  • Add API key prompt with MiniMax platform link
  • Add connectivity test against https://api.minimax.io/v1
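The connectivity test described above could look roughly like the following. This is a sketch, not the installer's actual code: the function name, the `/models` path, and the response handling are assumptions; only the base URL comes from this PR.

```shell
# Hypothetical connectivity check against the MiniMax endpoint.
# Endpoint path and error handling are illustrative assumptions.
MINIMAX_BASE_URL="https://api.minimax.io/v1"

check_minimax() {
  local api_key="$1"
  local status
  # Fetch only the HTTP status code, discarding the body.
  status=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: Bearer ${api_key}" \
    "${MINIMAX_BASE_URL}/models")
  if [ "$status" = "200" ]; then
    echo "MiniMax connectivity OK"
  else
    echo "MiniMax connectivity failed (HTTP ${status})" >&2
    return 1
  fi
}
```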

Higress AI Gateway (setup-higress.sh)

  • Add dedicated minimax provider case with DNS service source registration
  • Configure OpenAI-compatible provider pointing to https://api.minimax.io/v1
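A minimal sketch of what the dedicated provider case might set, assuming variable names not taken from the actual setup-higress.sh; the provider type, base URL, and DNS domain reflect the bullets above.

```shell
# Illustrative provider-case sketch; variable names are assumptions,
# values are from this PR's description.
PROVIDER="minimax"
case "$PROVIDER" in
  minimax)
    PROVIDER_TYPE="openai"                        # OpenAI-compatible protocol
    PROVIDER_BASE_URL="https://api.minimax.io/v1" # gateway upstream
    PROVIDER_DOMAIN="api.minimax.io"              # registered as a DNS service source
    ;;
esac
echo "${PROVIDER_TYPE} ${PROVIDER_BASE_URL}"
```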

Model Metadata (4 scripts)

  • Add MiniMax-M2.5-highspeed to context window mappings (CTX=200000, MAX=128000)
  • Update scripts: start-manager-agent.sh, update-manager-model.sh, update-worker-model.sh, generate-worker-config.sh
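The context-window mapping could plausibly be a `case` lookup like the sketch below; the function name is hypothetical, but the CTX/MAX values are the ones this PR adds.

```shell
# Hypothetical lookup mirroring the mapping added to the four metadata
# scripts; only the MiniMax-M2.5-highspeed entry is from this PR.
model_limits() {
  case "$1" in
    MiniMax-M2.5-highspeed)
      echo "CTX=200000 MAX=128000" ;;
    *)
      echo "CTX=unknown MAX=unknown" ;;
  esac
}

model_limits MiniMax-M2.5-highspeed
```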

Documentation (2 SKILL.md files)

  • Update model tables to include MiniMax-M2.5-highspeed

Testing

  • Shell script syntax validation (bash -n) passes for all modified scripts
  • All changes consistent across 9 files
  • No existing functionality affected (additive changes only)
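The `bash -n` syntax validation mentioned above can be reproduced as follows; the temporary script stands in for the nine modified files, whose exact list is not repeated here.

```shell
# Minimal reproduction of the syntax check: bash -n parses a script
# without executing it and exits nonzero on a syntax error.
tmp=$(mktemp)
printf 'echo "hello"\n' > "$tmp"
if bash -n "$tmp"; then
  echo "syntax OK"
else
  echo "syntax error"
fi
rm -f "$tmp"
```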

- Add dedicated MiniMax provider case in Higress AI Gateway setup
  (setup-higress.sh) with DNS service source and OpenAI-compatible
  provider configuration pointing to https://api.minimax.io/v1
- Add MiniMax provider option (option 3) in both bash and PowerShell
  installers with model submenu (MiniMax-M2.5 / MiniMax-M2.5-highspeed),
  API key prompt, and bilingual i18n messages (zh/en)
- Add MiniMax-M2.5-highspeed to model context window mappings across
  all 4 model metadata scripts with CTX=200000, MAX=128000
- Update SKILL.md documentation for both model-switch and
  worker-model-switch skills to include MiniMax-M2.5-highspeed
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


PR Bot does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@github-actions
Contributor

github-actions bot commented Mar 12, 2026

📊 CI Metrics Report

ℹ️ No baseline available - This is the first run or baseline data was not found.

Summary

| Metric | Value |
| --- | --- |
| LLM Calls | 409 |
| Input Tokens | 11770894 |
| Output Tokens | 113128 |

By Role

| Role | LLM Calls | Input Tokens | Output Tokens |
| --- | --- | --- | --- |
| 🧠 Manager | 220 | 7967337 | 59894 |
| 🔧 Workers | 189 | 3803557 | 53234 |

Per-Test Breakdown

| Test | Mgr Calls | Wkr Calls | Mgr In | Wkr In | Mgr Out | Wkr Out |
| --- | --- | --- | --- | --- | --- | --- |
| 02-create-worker | 40 | 0 | 959805 | 0 | 7778 | 0 |
| 03-assign-task | 19 | 28 | 523514 | 481641 | 4906 | 4836 |
| 04-human-intervene | 30 | 19 | 870022 | 437777 | 9590 | 4541 |
| 05-heartbeat | 9 | 4 | 396484 | 125550 | 2379 | 1289 |
| 06-multi-worker | 51 | 46 | 2127692 | 927713 | 13102 | 19350 |
| 14-git-collab | 71 | 92 | 3089820 | 1830876 | 22139 | 23218 |

Generated by HiClaw CI on 2026-03-13 04:13:04 UTC

@johnlanni
Collaborator

@octo-patch Please sign the CLA

@octo-patch octo-patch closed this Mar 13, 2026
@octo-patch octo-patch reopened this Mar 13, 2026