
feat: add MiniMax as LLM provider #5

Open

octo-patch wants to merge 1 commit into SalesforceAIResearch:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax AI as a first-class LLM provider for Enterprise Deep Research, alongside existing OpenAI, Anthropic, Groq, Google Vertex AI, and SambaNova backends.

Changes

Backend (llm_clients.py)

  • New MiniMaxClient class with OpenAI-compatible streaming via https://api.minimax.io/v1
  • Temperature clamping to (0.0, 1.0], as required by the MiniMax API
  • <think>...</think> tag stripping for reasoning model responses
  • MODEL_CONFIGS entry with MiniMax-M2.7 (1M context) and MiniMax-M2.7-highspeed models
  • get_llm_client() and get_async_llm_client() factory support
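The two normalization steps described above (temperature clamping and think-tag stripping) could be sketched roughly as follows; the helper names here are hypothetical, not the actual MiniMaxClient internals:

```python
import re

# Hypothetical helpers illustrating the behavior described above; the real
# MiniMaxClient in llm_clients.py may implement these differently.

def clamp_temperature(temperature: float) -> float:
    """Clamp into the MiniMax-accepted range (0.0, 1.0]; exactly 0.0 is invalid."""
    return min(max(temperature, 0.01), 1.0)

# Non-greedy match so multiple reasoning blocks are each removed.
_THINK_TAG = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_think_tags(text: str) -> str:
    """Drop <think>...</think> reasoning blocks from a model response."""
    return _THINK_TAG.sub("", text).strip()
```

The lower bound of 0.01 is an arbitrary illustrative choice for "strictly greater than 0.0"; the PR does not state the exact floor the client uses.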

Configuration (src/configuration.py)

  • MINIMAX added to LLMProvider enum
  • Default model mapping (MiniMax-M2.7) in llm_model property
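A rough illustration of the configuration change (existing enum members and the llm_model property are elided; everything except the MINIMAX entry and its default model is a placeholder):

```python
from enum import Enum

class LLMProvider(str, Enum):
    # Existing members elided; MINIMAX is the new entry added by this PR.
    OPENAI = "openai"
    MINIMAX = "minimax"

# Default model per provider; the PR maps MINIMAX to MiniMax-M2.7.
DEFAULT_MODELS = {
    LLMProvider.OPENAI: "gpt-4o",        # placeholder for an existing default
    LLMProvider.MINIMAX: "MiniMax-M2.7",
}

def default_model(provider: LLMProvider) -> str:
    """Stand-in for the llm_model property in src/configuration.py."""
    return DEFAULT_MODELS[provider]
```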

Frontend (InitialScreen.js)

  • MiniMax-M2.7 option in model selection dropdown

Docs & Config

  • .env.sample: MINIMAX_API_KEY variable and provider config
  • README.md: MiniMax in supported models table and API key docs
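A sketch of the resulting .env.sample entries; only MINIMAX_API_KEY is named in the PR, so the provider-selection variable name below is an assumption:

```shell
# MiniMax configuration (MINIMAX_API_KEY per the PR; other names illustrative)
LLM_PROVIDER=minimax
MINIMAX_API_KEY=your-minimax-api-key
```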

Tests (tests/test_minimax_provider.py)

  • 25 unit tests covering MODEL_CONFIGS, MiniMaxClient, temperature clamping, think-tag stripping, the LLMProvider enum, the factory functions, and the Configuration class
  • 3 integration tests (skipped without API key)
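The skip-without-API-key pattern mentioned above might look like this (class and method names are hypothetical; the real tests live in tests/test_minimax_provider.py):

```python
import os
import unittest

@unittest.skipUnless(os.environ.get("MINIMAX_API_KEY"),
                     "integration tests require a live MiniMax API key")
class TestMiniMaxIntegration(unittest.TestCase):
    # Hypothetical placeholder; the real integration tests call the live API.
    def test_client_answers(self):
        self.assertTrue(True)
```

With the decorator at class level, the whole integration suite is reported as skipped in environments where the key is absent, so CI stays green without a live credential.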

Test Plan

  • All 25 unit tests pass
  • All 3 integration tests pass with live MiniMax API
  • Verify frontend model dropdown shows MiniMax-M2.7 option
  • Run existing test suite to confirm no regressions

@salesforce-cla

Thanks for the contribution! Unfortunately we can't verify the commit author(s): PR Bot <p***@m***.com>. One possible solution is to add that email to your GitHub account. Alternatively you can change your commits to another email and force push the change. After getting your commits associated with your GitHub account, sign the Salesforce Inc. Contributor License Agreement and this Pull Request will be revalidated.

