
feat: add MiniMax as 4th LLM provider (M2.7, 204K context)#18

Open
octo-patch wants to merge 1 commit into Y-Research-SBU:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a first-class LLM provider alongside OpenAI, Anthropic, and Qwen. MiniMax offers the M2.7 model with a 204K context window and an OpenAI-compatible API, so integration goes through ChatOpenAI with a custom base URL and requires no new dependencies.

Changes

  • default_config.py: Add minimax_api_key config field
  • trading_graph.py: Add MiniMax to _get_api_key(), _create_llm() (with temperature clamping to 0.01–1.0), and update_api_key()
  • web_interface.py: Add MiniMax to provider validation, API key management, key status, and error handling endpoints
  • templates/demo_new.html: Add MiniMax to LLM provider dropdown and API key input group in the web UI
  • README.md / README_CN.md: Add MiniMax environment variable setup and acknowledgement
  • tests/: 19 unit tests + 3 integration tests covering config, factory, temperature clamping, API key lifecycle, provider switching, and web endpoints
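Since MiniMax speaks the OpenAI wire protocol, the `_create_llm()` change is essentially a new provider branch that points the existing OpenAI client at a different base URL. A minimal sketch of that dispatch logic follows; the function name `build_llm_kwargs` and its signature are illustrative, not the actual code in trading_graph.py:

```python
# Hypothetical sketch of the MiniMax branch in the LLM factory.
# Names here (build_llm_kwargs, MINIMAX_BASE_URL) are assumptions,
# not the actual identifiers used in trading_graph.py.
MINIMAX_BASE_URL = "https://api.minimax.io/v1"

def build_llm_kwargs(provider: str, model: str, api_key: str) -> dict:
    """Return constructor kwargs for the chat-model client."""
    if provider == "minimax":
        # MiniMax is OpenAI-compatible, so reuse the ChatOpenAI client
        # with a custom base URL -- zero new dependencies.
        return {
            "model": model,
            "openai_api_key": api_key,
            "openai_api_base": MINIMAX_BASE_URL,
        }
    raise ValueError(f"unsupported provider: {provider}")
```

The returned dict would then be passed straight to the existing client, e.g. `ChatOpenAI(**build_llm_kwargs("minimax", "MiniMax-M2.7", key))`.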

MiniMax Models

| Model | Context | Use Case |
| --- | --- | --- |
| MiniMax-M2.7 | 204K tokens | Agent + Graph LLM (default) |
| MiniMax-M2.7-highspeed | 204K tokens | Faster inference |

Usage

```bash
export MINIMAX_API_KEY="your_minimax_api_key_here"
python web_interface.py
# Select "MiniMax" from the LLM Provider dropdown
```

Or via config:

```python
config = {
    "agent_llm_provider": "minimax",
    "graph_llm_provider": "minimax",
    "agent_llm_model": "MiniMax-M2.7",
    "graph_llm_model": "MiniMax-M2.7",
    "minimax_api_key": "your-key"
}
tg = TradingGraph(config=config)
```

Test Plan

  • 19 unit tests pass (config, factory, temp clamping, API key, web endpoints, provider switching)
  • 3 integration tests pass (real MiniMax API calls with M2.7-highspeed)
  • Manual: select MiniMax in web UI dropdown, enter API key, run analysis

Notes

  • MiniMax uses the existing ChatOpenAI class with openai_api_base="https://api.minimax.io/v1" — zero new dependencies
  • Temperature is clamped to (0.01, 1.0] per MiniMax API requirements
  • All existing OpenAI/Anthropic/Qwen functionality remains unchanged
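The temperature clamping mentioned above can be sketched in a few lines. This is an illustrative implementation of the (0.01, 1.0] rule, not the literal helper from trading_graph.py:

```python
# Illustrative sketch of the temperature clamping described in this PR.
# MiniMax rejects temperature values outside (0.01, 1.0], so out-of-range
# inputs are pulled to the nearest accepted bound before the API call.
def clamp_minimax_temperature(temperature: float) -> float:
    """Clamp a requested temperature into MiniMax's accepted range."""
    return min(max(temperature, 0.01), 1.0)
```

For example, a config requesting temperature 0.0 (valid for OpenAI) would be sent to MiniMax as 0.01, and anything above 1.0 as 1.0; values already in range pass through unchanged.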
