
Conversation

@leslie2046
Contributor

…king scenarios.
ref langgenius/dify-official-plugins#2195

Pull Request Checklist

Thank you for your contribution! Before submitting your PR, please make sure you have completed the following checks:

Compatibility Check

  • I have checked whether this change affects the backward compatibility of the plugin declared in README.md
  • I have checked whether this change affects the forward compatibility of the plugin declared in README.md
  • If this change introduces a breaking change, I have discussed it with the project maintainer and specified the release version in the README.md
  • I have described the compatibility impact and the corresponding version number in the PR description
  • I have checked whether the plugin version is updated in the README.md

Available Checks

  • Code has passed local tests
  • Relevant documentation has been updated (if necessary)

@gemini-code-assist
Contributor

Summary of Changes

Hello @leslie2046, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the dify_plugin's large language model interface by enhancing the logic for handling 'thinking' states. The primary goal is to ensure that multi-turn thinking scenarios are accurately represented with <think> and </think> tags, thereby improving the clarity and structural integrity of the model's reasoning output.

Highlights

  • Enhanced think tag handling: The logic for wrapping reasoning content with <think> and </think> tags within the _wrap_thinking_by_reasoning_content method has been significantly improved.
  • Improved Multi-turn Thinking Support: This update specifically addresses and supports multi-turn thinking scenarios by refining how the think closure tag is managed, ensuring correct state transitions and content encapsulation.
  • Refactored Content Concatenation: An output variable was introduced to streamline the concatenation of reasoning_content and content, ensuring precise tag placement and proper preservation of all content.
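The wrapping behavior described in these highlights can be sketched as a small state machine over streamed chunks. This is a hypothetical reconstruction, not the actual `_wrap_thinking_by_reasoning_content` source: the `(reasoning_content, content)` chunk shape and the variable names are assumptions based on the snippets quoted later in this thread.

```python
def wrap_thinking(chunks):
    """Wrap each run of reasoning_content in <think>...</think>.

    chunks: iterable of (reasoning_content, content) string pairs, as
    streamed deltas from a reasoning model. Yields wrapped output text.
    """
    is_reasoning = False
    for reasoning_content, content in chunks:
        output = ""
        if reasoning_content:
            if not is_reasoning:
                # First reasoning delta of this turn: open the tag.
                is_reasoning = True
                output += "<think>\n"
            output += reasoning_content
        else:
            if is_reasoning:
                # Reasoning ended (e.g. the model switched to a tool
                # call): close the tag even if no content follows.
                # This is the multi-turn case the PR fixes.
                is_reasoning = False
                output += "\n</think>"
            if content:
                output += content
        yield output
```

The essential part is the `else` branch: when a turn ends in a tool call with no further `reasoning_content`, the `</think>` tag is still emitted, which is exactly what the OLD_LOGIC samples later in this thread were missing.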


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request enhances the logic for wrapping reasoning content by ensuring that the <think> tag is always properly closed with </think>. This fixes a potential bug in multi-turn thinking scenarios where the closing tag could be missed. The change is correct, and I've added one suggestion to refactor the new logic for improved clarity and conciseness.

Comment on lines 547 to 553
if is_reasoning:
    is_reasoning = False
    if not reasoning_content:
        output = "\n</think>"
    if content:
        output += content


medium

The logic for handling the end of a reasoning block can be simplified for better readability and conciseness. The check if not reasoning_content: is redundant because this code is inside the else block of if reasoning_content:, where reasoning_content is guaranteed to be falsy. Additionally, the output string construction can be simplified into a single line.

            if is_reasoning:
                is_reasoning = False
                output = "\n</think>" + content
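The equivalence the reviewer describes can be checked with a small hypothetical harness (the helper names below are illustrative, not from the plugin). The two forms agree whenever `content` is a string, which holds in the `else` branch in question:

```python
def original_branch(reasoning_content, content):
    # Original logic from the diff, as reached with is_reasoning == True
    # and reasoning_content falsy (else branch of `if reasoning_content:`).
    output = ""
    if not reasoning_content:
        output = "\n</think>"
    if content:
        output += content
    return output

def simplified_branch(content):
    # Reviewer's suggested one-liner.
    return "\n</think>" + content

# Both forms produce identical output for empty and non-empty content.
for content in ["", "final answer"]:
    assert original_branch("", content) == simplified_branch(content)
```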

@leslie2046 leslie2046 changed the title ENH: The addition of the think closure tag supports multi-turn thin… enh: The addition of the think closure tag supports multi-turn thin… Dec 22, 2025
@Yeuoly
Collaborator

Yeuoly commented Dec 22, 2025

Would you mind adding some tests for the changes?

@leslie2046
Contributor Author

Would you mind adding some tests for the changes?

Fine, I will add some tests tomorrow.

@leslie2046
Contributor Author

leslie2046 commented Dec 23, 2025

provider: DeepSeek
model: deepseek-reasoner

Result:

[Request 1] ...
R1 Done. Tool Calls: 1

[Request 2] ...
R2 Done.

================================================================================
OUTPUT VISUALIZATION
================================================================================

>>> Mode: OLD_LOGIC <<<

--- Human Readability (OLD_LOGIC) ---
AI: <think>
The user wants to know the weather in Beijing. I need to use the get_weather tool to fetch Beijing's weather information. So, I'll call this function.
[System: Tool Result used...]
AI: <think>
The tool returned Beijing's weather information: sunny, 25°C, 40% humidity. I need to answer in Chinese. So I could reply: "It's sunny in Beijing today, 25°C, 40% humidity." Or I could add something like "the weather is nice." Now I'll reply.
</think>It's sunny in Beijing today, 25°C, 40% humidity. The weather is nice, good for outdoor activities.

================================================================================

>>> Mode: NEW_LOGIC <<<

--- Human Readability (NEW_LOGIC) ---
AI: <think>
The user wants to know the weather in Beijing. I need to use the get_weather tool to fetch Beijing's weather information. So, I'll call this function.
</think>
[System: Tool Result used...]
AI: <think>
The tool returned Beijing's weather information: sunny, 25°C, 40% humidity. I need to answer in Chinese. So I could reply: "It's sunny in Beijing today, 25°C, 40% humidity." Or I could add something like "the weather is nice." Now I'll reply.
</think>It's sunny in Beijing today, 25°C, 40% humidity. The weather is nice, good for outdoor activities.

================================================================================

@leslie2046
Contributor Author

provider: siliconflow
model: Pro/deepseek-ai/DeepSeek-V3.2


[Request 1] ...
R1 Done. Tool Calls: 1

[Request 2] ...
R2 Done.

================================================================================
OUTPUT VISUALIZATION
================================================================================

>>> Mode: OLD_LOGIC <<<

--- Human Readability (OLD_LOGIC) ---
AI: <think>
The user wants to know the weather in Beijing. I need to use the get_weather tool to fetch Beijing's weather information. I'll call this function.
</think>


[System: Tool Result used...]
AI: <think>
Based on the returned information, it's sunny in Beijing, 25°C, 40% humidity. I can reply to the user with this.
</think>The current weather in Beijing is as follows:

- **Conditions**: Sunny ☀️
- **Temperature**: 25°C
- **Humidity**: 40%

The weather is nice today, good for outdoor activities.

================================================================================

>>> Mode: NEW_LOGIC <<<

--- Human Readability (NEW_LOGIC) ---
AI: <think>
The user wants to know the weather in Beijing. I need to use the get_weather tool to fetch Beijing's weather information. I'll call this function.
</think>


[System: Tool Result used...]
AI: <think>
Based on the returned information, it's sunny in Beijing, 25°C, 40% humidity. I can reply to the user with this.
</think>The current weather in Beijing is as follows:

- **Conditions**: Sunny ☀️
- **Temperature**: 25°C
- **Humidity**: 40%

The weather is nice today, good for outdoor activities.

================================================================================

@leslie2046
Contributor Author

leslie2046 commented Dec 23, 2025

provider: volcengine_maas
model: DeepSeek-V3.2


[Request 1] ...
R1 Done. Tool Calls: 1

[Request 2] ...
R2 Done.

================================================================================
OUTPUT VISUALIZATION
================================================================================

>>> Mode: OLD_LOGIC <<<

--- Human Readability (OLD_LOGIC) ---
AI: <think>
The user wants to know the weather in Beijing. I need to use the get_weather tool to fetch Beijing's weather information. So, I'll call this tool.
[System: Tool Result used...]
AI: <think>
Based on the weather tool's result, it's sunny in Beijing, 25 degrees Celsius, 40% humidity. I need to reply to the user in Chinese with this information. I could say: "It's sunny in Beijing today, 25°C, 40% humidity." Now I'll reply.
</think>It's sunny in Beijing today, 25°C, 40% humidity.

================================================================================

>>> Mode: NEW_LOGIC <<<

--- Human Readability (NEW_LOGIC) ---
AI: <think>
The user wants to know the weather in Beijing. I need to use the get_weather tool to fetch Beijing's weather information. So, I'll call this tool.
</think>
[System: Tool Result used...]
AI: <think>
Based on the weather tool's result, it's sunny in Beijing, 25 degrees Celsius, 40% humidity. I need to reply to the user in Chinese with this information. I could say: "It's sunny in Beijing today, 25°C, 40% humidity." Now I'll reply.
</think>It's sunny in Beijing today, 25°C, 40% humidity.

================================================================================

@leslie2046
Contributor Author

@Yeuoly cc

Collaborator

@Yeuoly Yeuoly left a comment


LGTM

@Yeuoly Yeuoly merged commit 2409ade into langgenius:main Dec 24, 2025
3 checks passed
@leslie2046 leslie2046 deleted the optimization branch December 25, 2025 00:59
