feat: add MiniMax as LLM provider #5
Open
octo-patch wants to merge 1 commit into SalesforceAIResearch:main
Conversation
Add MiniMax AI (https://www.minimax.io/) as a first-class LLM provider alongside the existing OpenAI, Anthropic, Groq, Google, and SambaNova backends.

Changes:
- `llm_clients.py`: `MiniMaxClient` with OpenAI-compatible streaming, temperature clamping to (0, 1], and think-tag stripping; `MODEL_CONFIGS` entry with MiniMax-M2.7 and MiniMax-M2.7-highspeed models; `get_llm_client()` and `get_async_llm_client()` factory support
- `src/configuration.py`: `MINIMAX` enum in `LLMProvider`, default model mapping
- `.env.sample`: `MINIMAX_API_KEY` and provider config
- `InitialScreen.js`: MiniMax-M2.7 model option in the frontend dropdown
- `README.md`: MiniMax in the supported models table and API key docs
- `tests/test_minimax_provider.py`: 25 unit + 3 integration tests
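The two response-handling behaviors called out above, clamping the temperature into (0, 1] and stripping `<think>...</think>` blocks from reasoning-model output, could be sketched roughly as follows. This is a minimal illustration, assuming the OpenAI SDK is pointed at MiniMax's OpenAI-compatible endpoint; the method and helper names are invented here and not taken from the PR's actual code:

```python
import re

# <think>...</think> blocks emitted by reasoning models are removed
# before the response text is returned to callers.
THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


class MiniMaxClient:
    """Illustrative sketch of a MiniMax client, not the PR's implementation."""

    BASE_URL = "https://api.minimax.io/v1"

    def __init__(self, api_key: str, model: str = "MiniMax-M2.7"):
        # Imported lazily so this module loads even without the SDK installed.
        from openai import OpenAI
        self.model = model
        self._client = OpenAI(api_key=api_key, base_url=self.BASE_URL)

    @staticmethod
    def clamp_temperature(temperature: float) -> float:
        # MiniMax accepts temperatures in (0, 1]; pull out-of-range values
        # back into that interval (0.01 stands in for "just above zero").
        return min(max(temperature, 0.01), 1.0)

    @staticmethod
    def strip_think_tags(text: str) -> str:
        # Drop any reasoning traces before handing the text back.
        return THINK_TAG_RE.sub("", text)

    def complete(self, prompt: str, temperature: float = 0.7) -> str:
        response = self._client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=self.clamp_temperature(temperature),
        )
        return self.strip_think_tags(response.choices[0].message.content)
```

Keeping the clamp and the tag strip as small static helpers makes them unit-testable without any network access, which matters for a provider client whose main code path is an API call.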
Thanks for the contribution! Unfortunately we can't verify the commit author(s): PR Bot <p***@m***.com>. One possible solution is to add that email to your GitHub account. Alternatively, you can change your commits to another email and force-push the change. After your commits are associated with your GitHub account, sign the Salesforce Inc. Contributor License Agreement and this pull request will be revalidated.
Summary
Add MiniMax AI as a first-class LLM provider for Enterprise Deep Research, alongside existing OpenAI, Anthropic, Groq, Google Vertex AI, and SambaNova backends.
Changes
Backend (llm_clients.py)
- `MiniMaxClient` class with OpenAI-compatible streaming via https://api.minimax.io/v1
- Temperature clamping to (0, 1]
- `<think>...</think>` tag stripping for reasoning model responses
- `MODEL_CONFIGS` entry with MiniMax-M2.7 (1M context) and MiniMax-M2.7-highspeed models
- `get_llm_client()` and `get_async_llm_client()` factory support

Configuration (src/configuration.py)
- `MINIMAX` added to `LLMProvider` enum
- Default model mapping (MiniMax-M2.7) in `llm_model` property

Frontend (InitialScreen.js)
- MiniMax-M2.7 model option in the model dropdown
Docs & Config
- `.env.sample`: `MINIMAX_API_KEY` variable and provider config
- `README.md`: MiniMax in supported models table and API key docs

Tests (tests/test_minimax_provider.py)
- 25 unit tests and 3 integration tests
Test Plan
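In the spirit of the unit tests the PR describes, the pure helpers (temperature clamping and think-tag stripping) can be exercised without any network calls. The implementations below are inlined for self-containment, and all names are illustrative rather than copied from `tests/test_minimax_provider.py`:

```python
import re

# Inlined stand-ins for the helpers under test (illustrative names).
THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


def clamp_temperature(temperature: float) -> float:
    # Force the value into MiniMax's assumed (0, 1] range.
    return min(max(temperature, 0.01), 1.0)


def strip_think_tags(text: str) -> str:
    return THINK_TAG_RE.sub("", text)


def test_temperature_clamped_into_half_open_interval():
    assert clamp_temperature(1.7) == 1.0   # too high -> upper bound
    assert clamp_temperature(0.0) == 0.01  # zero -> just above zero
    assert clamp_temperature(0.5) == 0.5   # in range -> unchanged


def test_think_tags_removed_from_reasoning_output():
    raw = "<think>chain of thought</think>final answer"
    assert strip_think_tags(raw) == "final answer"
    # Text without tags passes through untouched.
    assert strip_think_tags("plain text") == "plain text"
```

Run with `pytest` to execute both checks; integration tests against the live API would additionally need `MINIMAX_API_KEY` set.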