
Conversation

@Yeuoly (Collaborator) commented Dec 27, 2025

Motivation

  • Allow tuning the ping max_tokens used during credentials validation so providers with different behaviors can be supported.
  • Preserve the streaming validation behavior that historically used max_tokens=10 to ensure a token chunk is emitted while still permitting explicit overrides.
  • Keep project metadata up to date by bumping the SDK package version.

Description

  • Read validate_credentials_max_tokens from credentials and use it as the ping max_tokens for non-stream validation in python/dify_plugin/interfaces/model/openai_compatible/llm.py (a usage sketch follows this list).
  • For stream-mode validation, keep the default at 10 but allow an override when validate_credentials_max_tokens is explicitly provided, and add an inline comment explaining the rationale.
  • Bump the SDK version to 0.7.1 in python/pyproject.toml, update python/pdm.lock accordingly, and apply small lint/format adjustments in llm.py.
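
As a usage sketch, here is where the new key sits in a provider's credentials dict. endpoint_model_name and validate_credentials_max_tokens appear in this PR's diff; every other key and value below is a hypothetical placeholder, not the SDK's actual schema:

credentials = {
    "api_key": "sk-...",  # hypothetical placeholder
    "endpoint_url": "https://example.com/v1",  # hypothetical placeholder
    "endpoint_model_name": "my-model",
    # New in this PR: overrides the ping max_tokens used during credential
    # validation (defaults: 5 for non-stream, 10 for stream mode).
    "validate_credentials_max_tokens": 16,
}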

Testing

  • Ran ./scripts/fix.sh, which executed pdm run ruff format and pdm run ruff check --fix; it completed successfully with one auto-fix applied.
  • No additional automated test suite was executed for this change.

Codex Task

@gemini-code-assist (Contributor)

Summary of Changes

Hello @Yeuoly, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the flexibility of credential validation by introducing a configurable max_tokens parameter, which is crucial for accommodating the diverse behaviors of different LLM providers. It also refines the streaming validation logic to maintain consistency while allowing custom overrides, and updates the project's SDK version.

Highlights

  • Configurable max_tokens for Credential Validation: The max_tokens parameter used during credentials validation is now configurable via validate_credentials_max_tokens in the credentials dictionary, allowing better support for LLM providers with varying behaviors.
  • Streaming Validation Behavior Refinement: The streaming validation behavior maintains its historical max_tokens=10 default to ensure a token chunk is emitted, but now permits explicit overrides if validate_credentials_max_tokens is provided in the credentials.
  • SDK Version Bump: The Dify Plugin SDK version has been updated from 0.7.0 to 0.7.1 to keep project metadata current.

@gemini-code-assist (bot) left a comment

Code Review

This pull request makes the max_tokens parameter for credential validation configurable and bumps the SDK version. The intent is good, but the implementation for retrieving the configurable value has a subtle bug. The use of or to provide a default value incorrectly overrides a legitimate value of 0. I've provided review comments with suggestions to fix this issue by explicitly checking for None.


# prepare the payload for a simple ping to the model
data = {"model": credentials.get("endpoint_model_name", model), "max_tokens": 5}
validate_credentials_max_tokens = credentials.get("validate_credentials_max_tokens", 5) or 5

Severity: high

The use of or 5 can lead to unexpected behavior. If a user provides validate_credentials_max_tokens: 0, this expression will evaluate to 5, which is likely not the intended behavior as 0 can be a valid value for some APIs.

A more robust way to handle this is to explicitly check for None before applying the default. This can be done concisely using an assignment expression (walrus operator), which is supported by your target Python version (>=3.11).

Suggested change
- validate_credentials_max_tokens = credentials.get("validate_credentials_max_tokens", 5) or 5
+ validate_credentials_max_tokens = v if (v := credentials.get("validate_credentials_max_tokens")) is not None else 5
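
A quick runnable illustration of the difference when a caller passes an explicit 0:

credentials = {"validate_credentials_max_tokens": 0}

buggy = credentials.get("validate_credentials_max_tokens", 5) or 5
fixed = v if (v := credentials.get("validate_credentials_max_tokens")) is not None else 5

print(buggy)  # 5 -- the explicit 0 is silently replaced by the falsy-value fallback
print(fixed)  # 0 -- the None check preserves the override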

# ADD stream validate_credentials
stream_mode_auth = credentials.get("stream_mode_auth", "not_use")
if stream_mode_auth == "use":
stream_validate_max_tokens = credentials.get("validate_credentials_max_tokens") or 10

Severity: high

This line has the same issue as in the non-streaming case. Using or 10 will incorrectly override an explicit setting of validate_credentials_max_tokens: 0 to 10.

To handle this correctly while keeping it concise, you can use an assignment expression to check for None before applying the default value.

Suggested change
- stream_validate_max_tokens = credentials.get("validate_credentials_max_tokens") or 10
+ stream_validate_max_tokens = v if (v := credentials.get("validate_credentials_max_tokens")) is not None else 10
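
Both call sites need the same None-aware lookup, so one possible refactor (purely illustrative; neither the PR nor this review proposes it) is a small shared helper:

def _validation_max_tokens(credentials: dict, default: int) -> int:
    # Return the explicit override if present (0 included), else the default.
    value = credentials.get("validate_credentials_max_tokens")
    return default if value is None else value

This keeps the non-stream default of 5 and the stream default of 10 in one pattern, e.g. data["max_tokens"] = _validation_max_tokens(credentials, 5) and stream_validate_max_tokens = _validation_max_tokens(credentials, 10).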

@Yeuoly closed this Dec 27, 2025