feat: add MiniMax as first-class LLM provider#121

Open
octo-patch wants to merge 1 commit into Narcooo:master from octo-patch:feature/add-minimax-provider
Conversation

@octo-patch

Summary

Add MiniMax as a first-class LLM provider alongside OpenAI and Anthropic.

MiniMax offers an OpenAI-compatible API with two models:

  • MiniMax-M2.7 — latest model, 204K context window
  • MiniMax-M2.7-highspeed — optimized for speed, 204K context window

Changes

  • Provider enum: Add "minimax" to LLMConfigSchema and AgentLLMOverrideSchema
  • Factory routing: Handle minimax in createLLMClient() — uses OpenAI SDK internally with MiniMax default base URL
  • Temperature clamping: MiniMax requires temperature in (0, 1] — auto-clamped in chatCompletion() and chatWithTools()
  • CLI: Update config set-global and init with minimax provider option
  • Docs: Update README (zh/en/ja) with MiniMax quick setup and .env.example
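The factory routing and temperature clamping described above can be sketched roughly as follows. This is a sketch based on the PR description, not the actual implementation: the type shapes, the `resolveBaseUrl` helper, and the 0.01 floor used when clamping into (0, 1] are all assumptions.

```typescript
// Sketch of the minimax additions to createLLMClient(); shapes are assumed.
type Provider = "openai" | "anthropic" | "minimax";

interface LLMConfig {
  provider: Provider;
  apiKey: string;
  baseUrl?: string;
  model: string;
}

const MINIMAX_DEFAULT_BASE_URL = "https://api.minimax.io/v1";

// MiniMax accepts temperature only in (0, 1]; out-of-range values are
// clamped. The 0.01 floor for non-positive values is an assumption.
function clampTemperature(t: number | undefined): number | undefined {
  if (t === undefined) return undefined;
  if (t <= 0) return 0.01;
  return Math.min(t, 1);
}

// MiniMax falls back to its default base URL when none is configured;
// other providers keep whatever the config supplies.
function resolveBaseUrl(config: LLMConfig): string | undefined {
  if (config.provider === "minimax") {
    return config.baseUrl ?? MINIMAX_DEFAULT_BASE_URL;
  }
  return config.baseUrl;
}
```

Since MiniMax is OpenAI-compatible, the minimax branch can then hand the resolved base URL and API key to the same OpenAI SDK client the openai branch already constructs.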

Quick setup

inkos config set-global \
  --provider minimax \
  --base-url https://api.minimax.io/v1 \
  --api-key <your-minimax-api-key> \
  --model MiniMax-M2.7
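The .env.example addition mentioned in Changes presumably looks something like the fragment below; the variable names are hypothetical, inferred from the CLI flags above rather than taken from the diff.

```shell
# MiniMax configuration (hypothetical variable names)
LLM_PROVIDER=minimax
LLM_BASE_URL=https://api.minimax.io/v1
LLM_API_KEY=your-minimax-api-key
LLM_MODEL=MiniMax-M2.7
```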

Files changed (11 files, ~474 additions)

  • packages/core/src/models/project.ts: add minimax to provider enums
  • packages/core/src/llm/provider.ts: factory routing, temperature clamping
  • packages/core/src/utils/config-loader.ts: require API key for minimax
  • packages/cli/src/commands/config.ts: update provider option text
  • packages/cli/src/commands/init.ts: add minimax example to the generated .env
  • .env.example: MiniMax configuration example
  • README.md / README.en.md / README.ja.md: MiniMax quick setup docs
  • packages/core/src/__tests__/minimax-provider.test.ts: 17 unit tests
  • packages/core/src/__tests__/minimax-integration.test.ts: 3 integration tests

Test plan

  • 17 unit tests covering schema validation, client creation, temperature clamping, streaming, and tool calling
  • 3 integration tests against real MiniMax API (sync, streaming, system messages)
  • All existing tests pass (pre-existing timeouts are unrelated to this change)
  • Manual: inkos config set-global --provider minimax --base-url https://api.minimax.io/v1 --api-key <key> --model MiniMax-M2.7
  • Manual: inkos doctor connectivity check with MiniMax

Add MiniMax (MiniMax-M2.7, MiniMax-M2.7-highspeed) as a built-in provider
alongside OpenAI and Anthropic. MiniMax exposes an OpenAI-compatible API at
api.minimax.io/v1; temperature is automatically clamped to (0, 1].

Changes:
- Add "minimax" to provider enum in LLMConfigSchema and AgentLLMOverrideSchema
- Handle minimax routing in createLLMClient() with default base URL
- Clamp temperature in both chatCompletion() and chatWithTools() for minimax
- Update CLI config commands (set-global, init) with minimax provider option
- Update .env.example with MiniMax configuration example
- Update README (zh/en/ja) with MiniMax quick setup docs
- Add 17 unit tests and 3 integration tests
