Conversation

@coohu commented Dec 4, 2025

Related Issues or Context

This PR contains Changes to Non-Plugin

  • Documentation
  • Other

This PR contains Changes to Non-LLM Models Plugin

  • I have Run Comprehensive Tests Relevant to My Changes

This PR contains Changes to LLM Models Plugin

  • My Changes Affect Message Flow Handling (System Messages and User→Assistant Turn-Taking)
  • My Changes Affect Tool Interaction Flow (Multi-Round Usage and Output Handling, for both Agent App and Agent Node)
  • My Changes Affect Multimodal Input Handling (Images, PDFs, Audio, Video, etc.)
  • My Changes Affect Multimodal Output Generation (Images, Audio, Video, etc.)
  • My Changes Affect Structured Output Format (JSON, XML, etc.)
  • My Changes Affect Token Consumption Metrics
  • My Changes Affect Other LLM Functionalities (Reasoning Process, Grounding, Prompt Caching, etc.)
  • Other Changes (Add New Models, Fix Model Parameters etc.)

Version Control (Any Changes to the Plugin Will Require Bumping the Version)

  • I have Bumped Up the Version in Manifest.yaml (Top-Level Version Field, Not in Meta Section)

Dify Plugin SDK Version

  • I have Ensured dify_plugin>=0.3.0,<0.6.0 is in requirements.txt (SDK docs)

Environment Verification (If Any Code Changes)

Local Deployment Environment

  • Dify Version is: , I have Tested My Changes on Local Deployment Dify with a Clean Environment That Matches the Production Configuration.

SaaS Environment

  • I have Tested My Changes on cloud.dify.ai with a Clean Environment That Matches the Production Configuration

@dosubot bot added labels size:XXL (This PR changes 1000+ lines, ignoring generated files) and enhancement (New feature or request) Dec 4, 2025
@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @coohu, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates ShengSuanYun as a new model provider, offering a broad selection of Large Language Models and text embedding capabilities. The integration includes detailed model configurations, custom API handling logic, and an automated publishing workflow, enhancing the platform's AI resource options and developer experience.

Highlights

  • New Model Provider Integration: Introduced ShengSuanYun as a new model provider, significantly expanding the available Large Language Models (LLMs) and text embedding models within the platform.
  • Extensive Model Support: Added a wide array of LLM and text embedding models from various vendors including Ali, Anthropic, ByteDance, DeepSeek, Google, Meta.AI, MiniMax, Moonshot, OpenAI, Streamlake, Tencent, Baidu, and Bigmodel, each with detailed configuration and pricing.
  • Automated Plugin Publishing Workflow: Implemented a GitHub Actions workflow (plugin-publish.yml) to automate the packaging and creation of pull requests for plugin updates upon release, streamlining the deployment process.
  • Enhanced Model Handling Logic: Developed custom Python classes (ShengsuanyunLargeLanguageModel and ShengsuanyunTextEmbeddingModel) to manage ShengSuanYun's specific API interactions, including credential updates, token calculation, and handling of multimodal content for non-vision-enabled models (see the sketch after this list).
  • Plugin Metadata and Documentation: Included comprehensive metadata in manifest.yaml (version 0.1.1), a README.md for quick setup, and a GUIDE.md for Dify plugin development, along with necessary .gitignore and .difyignore files.
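
A minimal sketch of the multimodal downgrade described in the highlight above, flattening image parts to plain text for models without vision support. All names here are illustrative assumptions, not the PR's actual code:

    # Hypothetical helper: collapse multimodal message content to text for
    # models that do not accept image inputs. OpenAI-style message dicts are
    # assumed; the real plugin works with Dify SDK message objects.
    def flatten_multimodal(messages: list[dict], supports_vision: bool) -> list[dict]:
        if supports_vision:
            return messages
        flattened = []
        for msg in messages:
            content = msg.get("content")
            if isinstance(content, list):
                # Keep only the text parts; drop image/audio parts.
                text = "".join(p.get("text", "") for p in content if p.get("type") == "text")
                msg = {**msg, "content": text}
            flattened.append(msg)
        return flattened
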
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/plugin-publish.yml

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces ShengSuanYun as a new provider for LLM and text embedding models, adding a significant number of new models. The implementation, however, contains several critical issues in the Python code concerning credential handling, URL construction, and token calculation logic, which could lead to runtime errors. Additionally, there are minor inconsistencies in model labels within the YAML configuration files. I have provided detailed feedback and code suggestions to address these problems and improve the robustness and correctness of the new provider integration.

Comment on lines 31 to 34
        credentials["endpoint_url"] = "https://router.shengsuanyun.com/api/v1"
        if credentials.get("base_url"):
            api_base = credentials["base_url"].rstrip("/")
            credentials_kwargs["base_url"] = api_base + "/v1"
@gemini-code-assist bot (Contributor):

critical

The variable credentials_kwargs is not defined in this scope, which will cause a NameError at runtime if a base_url is provided in the credentials. Additionally, the logic for constructing the URL is not robust, as it can lead to a malformed URL with a double /v1 suffix (e.g., .../v1/v1) if the provided base_url already includes it. The suggested change ensures the endpoint_url is constructed correctly, handling both cases and preventing runtime errors.

        base_url = credentials.get("base_url")
        if base_url:
            api_base = base_url.rstrip("/")
            if not api_base.endswith("/v1"):
                credentials["endpoint_url"] = f"{api_base}/v1"
            else:
                credentials["endpoint_url"] = api_base
        else:
            credentials["endpoint_url"] = "https://router.shengsuanyun.com/api/v1"

@coohu (Author) replied:

fixed.

Comment on lines 244 to 247
            assert prompt_messages[0].content is not None
            prompt_tokens = self._num_tokens_from_string(model, prompt_messages[0].content)
            assert assistant_message.content is not None
            completion_tokens = self._num_tokens_from_string(model, assistant_message.content)
@gemini-code-assist bot (Contributor):

critical

The fallback logic for token calculation is incorrect and will likely cause runtime errors.

  1. prompt_tokens is calculated based only on the content of the first message (prompt_messages[0]). It should account for all messages in the prompt to be accurate.
  2. completion_tokens is calculated using assistant_message.content, which is a list of PromptMessageContent objects, not a string. This will raise a TypeError. You should use the response_content string variable instead.
            prompt_text = ""
            for msg in prompt_messages:
                if isinstance(msg.content, str):
                    prompt_text += msg.content
                elif isinstance(msg.content, list):
                    for content_part in msg.content:
                        if isinstance(content_part, TextPromptMessageContent):
                            prompt_text += content_part.data
            prompt_tokens = self._num_tokens_from_string(model, prompt_text)
            completion_tokens = self._num_tokens_from_string(model, response_content)
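
For context, a plausible shape for the _num_tokens_from_string fallback used above, assuming a tiktoken-based approximation (the actual helper is not shown in this diff):

    import tiktoken

    def _num_tokens_from_string(self, model: str, text: str) -> int:
        # Approximate token count; unknown model names fall back to cl100k_base.
        try:
            encoding = tiktoken.encoding_for_model(model)
        except KeyError:
            encoding = tiktoken.get_encoding("cl100k_base")
        return len(encoding.encode(text))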

@coohu (Author) replied:

fixed

Comment on lines 251 to 254
credentials["extra_headers"] = {
"HTTP-Referer": "https://dify.ai/",
"X-Title": "Dify"
}
@gemini-code-assist bot (Contributor):

critical

The extra_headers are added to the credentials dictionary, but this dictionary is not used by the openai.OpenAI client, which is initialized with credentials_kwargs. As a result, the headers will not be sent with the request. These headers should be added to credentials_kwargs using the default_headers key to ensure they are included in the API call.

Suggested change
-        credentials["extra_headers"] = {
-            "HTTP-Referer": "https://dify.ai/",
-            "X-Title": "Dify"
-        }
+        credentials_kwargs["default_headers"] = {
+            "HTTP-Referer": "https://dify.ai/",
+            "X-Title": "Dify",
+        }
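
default_headers is consumed when the client is constructed, so with the change above the headers ride along on every request; a minimal sketch (the credential key names are placeholders, not necessarily the plugin's):

    import openai

    client = openai.OpenAI(
        api_key=credentials_kwargs["api_key"],        # placeholder key name
        base_url=credentials_kwargs["endpoint_url"],  # placeholder key name
        default_headers=credentials_kwargs["default_headers"],
    )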

@coohu (Author) replied:

Corrected

@@ -0,0 +1,24 @@
model: anthropic/claude-3.7-sonnet:thinking
label:
  en_US: Claude-3.7-sonnet（Thinking）
@gemini-code-assist bot (Contributor):

medium

The label uses full-width parentheses （）. For consistency with other labels, it's better to use standard half-width parentheses (). This ensures a uniform appearance across the UI.

  en_US: Claude-3.7-sonnet (Thinking)

@coohu (Author) replied:

Corrected

@@ -0,0 +1,24 @@
model: anthropic/claude-sonnet-4:thinking
label:
  en_US: Claude Sonnet 4 ( Thinking )
@gemini-code-assist bot (Contributor):

medium

The label contains unnecessary spaces around the word inside the parentheses. For better formatting and consistency, these spaces should be removed.

  en_US: Claude Sonnet 4 (Thinking)

@coohu (Author) replied:

Corrected

@@ -0,0 +1,24 @@
model: deepseek/deepseek-v3.1-think
label:
  en_US: ' DeepSeek V3.1 Think'
@gemini-code-assist bot (Contributor):

medium

The label has a leading space, which can cause inconsistent display and potential issues with string matching. It should be removed.

  en_US: DeepSeek V3.1 Think

@coohu (Author) replied:

Corrected

@@ -0,0 +1,23 @@
model: minimax/minimax-m2
label:
  en_US: 'MiniMax M2 '
@gemini-code-assist bot (Contributor):

medium

The label has a trailing space. This should be removed for consistency and to prevent potential string comparison issues.

  en_US: MiniMax M2

@coohu (Author) replied:

Corrected

@@ -0,0 +1,24 @@
model: baidu/ernie-4.5-turbo-128k
label:
  en_US: ' ERNIE-4.5-Turbo-128K'
@gemini-code-assist bot (Contributor):

medium

The label has a leading space, which can cause inconsistent display and potential issues with string matching. It should be removed.

  en_US: ERNIE-4.5-Turbo-128K

@coohu (Author) replied:

Corrected

@@ -0,0 +1,23 @@
model: bigmodel/glm-z1-airx
label:
  en_US: ​​GLM-Z1-AirX
@gemini-code-assist bot (Contributor):

medium

The label contains leading zero-width spaces (U+200B), which are invisible but can cause issues with string matching and rendering. These should be removed.

  en_US: GLM-Z1-AirX
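
Invisible characters like these are easy to reintroduce; a small illustrative scan over the model YAML files can catch them (the models/ path is an assumption about this plugin's layout):

    from pathlib import Path

    # U+200B (zero-width space) and U+FEFF (BOM) are the usual offenders.
    INVISIBLE = ("\u200b", "\ufeff")
    for path in Path("models").rglob("*.yaml"):
        text = path.read_text(encoding="utf-8")
        if any(ch in text for ch in INVISIBLE):
            print(f"{path}: contains invisible characters")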

@coohu (Author) replied:

Corrected
