fix: use configurable max_tokens in credential validation instead of hardcoded value #260
Conversation
The `validate_credentials` method in `OAICompatLargeLanguageModel` uses a hardcoded `max_tokens: 5` value when testing model connectivity. This can cause credential validation failures for models whose minimum token requirement is higher than 5, even when the credentials are correct.
Code Review
This pull request aims to fix credential validation failures for models with minimum token requirements by making the `max_tokens` parameter configurable instead of hardcoded. The change correctly replaces the hardcoded value with a value read from the credentials.
However, I've identified two issues:
- A critical issue where an invalid value for `max_tokens_to_sample` in the credentials can crash the request handler due to an unhandled `NameError`.
- A high-severity issue where the fix is incomplete, as the stream validation mode still uses a hardcoded `max_tokens` value, which can lead to the same validation failures this PR intends to solve.
Please see the detailed comments in the code.
```diff
         # prepare the payload for a simple ping to the model
-        data = {"model": credentials.get("endpoint_model_name", model), "max_tokens": 5}
+        data = {"model": credentials.get("endpoint_model_name", model),
+                "max_tokens": int(credentials.get("max_tokens_to_sample", 20))}
```
This direct `int()` conversion is unsafe. If `credentials.get('max_tokens_to_sample')` returns a non-numeric string (e.g., from user configuration), this will raise a `ValueError`. That exception is caught by the generic `except Exception` block below, but the error message formatting at line 254 (`f"... response body {response.text}"`) will then raise a `NameError` because the `response` variable has not been defined yet. This `NameError` is unhandled and will likely crash the request handler, giving the user a confusing stack trace.
To fix this, validate the value and handle conversion errors gracefully before it's used. For example, wrap the conversion in a try-except block and raise a `CredentialsValidateFailedError` with a clear message if it fails.
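One way to implement the suggestion above is a small helper that converts and validates the credential value before it ever reaches the request payload. This is an illustrative sketch: the helper name `parse_max_tokens` is hypothetical, and the exception class is stubbed here to stay self-contained (the real plugin would import its own `CredentialsValidateFailedError`).

```python
# Stub for the plugin's real exception type; assumed for illustration.
class CredentialsValidateFailedError(Exception):
    pass


def parse_max_tokens(credentials: dict, default: int = 20) -> int:
    """Read max_tokens_to_sample from credentials, failing with a clear
    validation error instead of a raw ValueError on bad input."""
    raw = credentials.get("max_tokens_to_sample", default)
    try:
        value = int(raw)
    except (TypeError, ValueError):
        raise CredentialsValidateFailedError(
            f"max_tokens_to_sample must be an integer, got {raw!r}"
        )
    if value <= 0:
        raise CredentialsValidateFailedError(
            "max_tokens_to_sample must be a positive integer"
        )
    return value
```

With this guard, a misconfigured credential produces an actionable validation error instead of cascading into the unrelated `NameError` described above.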
```diff
+        data = {"model": credentials.get("endpoint_model_name", model),
+                "max_tokens": int(credentials.get("max_tokens_to_sample", 20))}
```
While this change makes `max_tokens` configurable for the default validation path, the value is unconditionally overridden with a hardcoded `10` on line 206 when stream validation is enabled (`stream_mode_auth == 'use'`). This means that for models requiring more than 10 tokens, stream validation will still fail, making this fix incomplete. To fully address the issue described in the pull request, the `max_tokens` for stream validation should also be configurable or reuse the value you've set here.
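A minimal sketch of what the reviewer suggests: read the configurable limit once and use it for both the default and the stream validation payloads, so neither path falls back to a hardcoded value. Function and variable names here are illustrative, not the plugin's actual API.

```python
def build_validation_payload(credentials: dict, model: str, stream: bool) -> dict:
    """Build the connectivity-check payload, using one configurable
    token limit for both the default and stream validation paths."""
    max_tokens = int(credentials.get("max_tokens_to_sample", 20))
    payload = {
        "model": credentials.get("endpoint_model_name", model),
        "max_tokens": max_tokens,  # same limit regardless of stream mode
    }
    if stream:
        payload["stream"] = True  # no hardcoded override of max_tokens
    return payload
```

Sharing one limit keeps the two validation modes consistent, so a model with a minimum token requirement passes or fails for the same reason in both.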
Pull Request Checklist
Thank you for your contribution! Before submitting your PR, please make sure you have completed the following checks:
Compatibility Check