
feat: add MiniMax model compatibility support#162

Open
octo-patch wants to merge 1 commit into 666ghj:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add automatic MiniMax model detection via model name and base URL
  • Handle MiniMax API constraints: clamp temperature to 0.01 when set to 0 (MiniMax requires temperature ∈ (0.0, 1.0]), and replace unsupported response_format with prompt-based JSON instruction injection
  • Strip <think> tags from MiniMax M2.5 model responses in simulation_config_generator.py and oasis_profile_generator.py
  • Add MiniMax configuration examples to .env.example and documentation to README.md / README-EN.md
  • Add 23 unit tests for all MiniMax compatibility functions

Changes

| File | Description |
| --- | --- |
| `backend/app/utils/llm_client.py` | Add `_is_minimax()`, `_clamp_temperature()`, `_inject_json_instruction()` helpers; update `LLMClient.chat()` to conditionally skip `response_format` for MiniMax |
| `backend/app/services/simulation_config_generator.py` | Use MiniMax-aware JSON parsing and `<think>` tag stripping in `_call_llm_with_retry()` |
| `backend/app/services/oasis_profile_generator.py` | Use MiniMax-aware JSON parsing and `<think>` tag stripping in profile generation |
| `.env.example` | Add commented MiniMax configuration section |
| `README.md` | Add collapsible MiniMax setup guide (Chinese) |
| `README-EN.md` | Add collapsible MiniMax setup guide (English) |
| `backend/tests/test_minimax_compat.py` | 23 unit tests covering detection, temperature clamping, JSON injection, and response parsing |

Test plan

  • All 23 unit tests pass (pytest backend/tests/test_minimax_compat.py)
  • Verify MiniMax M2.5 model works end-to-end with simulation (requires MiniMax API key)
  • Verify existing non-MiniMax models (e.g., qwen-plus) are unaffected

MiniMax Configuration

```
LLM_API_KEY=your_minimax_api_key
LLM_BASE_URL=https://api.minimax.io/v1
LLM_MODEL_NAME=MiniMax-M2.5
```

Supported models: MiniMax-M2.5 (flagship, 204K context) and MiniMax-M2.5-highspeed (faster variant).
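A client consuming this configuration would assemble its request roughly as follows. This is a hedged sketch (the `build_minimax_request` helper is hypothetical, not part of the PR); it shows the two MiniMax constraints the PR handles: the temperature clamp and the omitted `response_format` key.

```python
import os


def build_minimax_request(messages, temperature=0.7):
    """Assemble chat-completion kwargs for a MiniMax model (sketch)."""
    # MiniMax requires temperature in (0.0, 1.0], so clamp 0 up to 0.01.
    temp = 0.01 if temperature <= 0 else min(temperature, 1.0)
    return {
        "model": os.environ.get("LLM_MODEL_NAME", "MiniMax-M2.5"),
        "messages": messages,
        "temperature": temp,
        # Note: no "response_format" key -- MiniMax does not support it;
        # JSON output is requested via the prompt instead.
    }
```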

🤖 Generated with Claude Code

- Add MiniMax model detection and compatibility handling in LLMClient
- Handle response_format incompatibility: MiniMax does not support the
  response_format parameter, so prompt engineering is used for JSON output
- Add temperature clamping for MiniMax (temperature must be > 0)
- Add robust JSON parsing from LLM responses (parse_json_from_response)
- Update simulation_config_generator and oasis_profile_generator for
  MiniMax compatibility
- Add MiniMax configuration examples in .env.example
- Add MiniMax documentation in README.md and README-EN.md
- Add unit tests for MiniMax compatibility functions

Supported models: MiniMax-M2.5, MiniMax-M2.5-highspeed
API docs: https://platform.minimax.io/docs/api-reference/text-openai-api
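The `parse_json_from_response` helper mentioned in the commit message could be sketched as follows; this is an illustrative best-effort parser under assumptions, not the PR's actual code.

```python
import json
import re


def parse_json_from_response(text: str):
    """Best-effort JSON extraction from an LLM reply (sketch).

    Strips <think> reasoning blocks and markdown code fences, then falls
    back to the first {...} span if the cleaned text is not valid JSON."""
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    text = re.sub(r"```[a-zA-Z]*", "", text).strip()
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", text, flags=re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise
```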
@dosubot bot added labels on Mar 12, 2026: `size:L` (This PR changes 100-499 lines, ignoring generated files), `documentation` (Improvements or additions to documentation), `LLM API` (Any questions regarding the LLM API)
