Conversation

@jacob-morrison
Contributor

Add --log_generator_metrics flag to enable wandb logging from the first vLLM generator (engine_id=0) to collect GPU power, memory, and utilization metrics. When enabled, uses wandb groups to link the trainer and generator runs in the UI with -trainer and -generator suffixes.


Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
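The grouped-run naming the description mentions can be sketched as follows. This is an illustrative reconstruction, not the PR's actual code: the flag name comes from the PR, but the helper and the base run name are assumptions.

```python
# Hypothetical sketch of deriving grouped W&B run names from the
# --log_generator_metrics flag; helper and run names are illustrative.
import argparse

def build_wandb_names(base_run_name: str, log_generator_metrics: bool):
    """Return (trainer_run_name, generator_run_name, group) for wandb.init."""
    if not log_generator_metrics:
        # Feature off: a single ungrouped trainer run, as before.
        return base_run_name, None, None
    # Feature on: two runs sharing a group, distinguished by suffix.
    group = base_run_name
    return f"{base_run_name}-trainer", f"{base_run_name}-generator", group

parser = argparse.ArgumentParser()
parser.add_argument("--log_generator_metrics", action="store_true",
                    help="Also log GPU metrics from the first vLLM generator.")
args = parser.parse_args([])  # no flags passed: feature is off by default

print(build_wandb_names("rlvr-run", args.log_generator_metrics))
# ('rlvr-run', None, None)
```

With the flag set, the same base name yields `rlvr-run-trainer` and `rlvr-run-generator` in the group `rlvr-run`, which is what links the two runs in the W&B UI.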
@gemini-code-assist
Contributor

Summary of Changes

Hello @jacob-morrison, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces an optional feature to log system metrics (GPU power, memory, and utilization) from the vLLM generator processes to Weights & Biases (W&B). By adding a dedicated flag and configuring W&B runs to be grouped and distinctly named, users can now monitor GPU performance metrics from the generator alongside the main training run, providing a more comprehensive view of resource utilization during large language model operations.

Highlights

  • New Feature Flag: Introduced a new --log_generator_metrics flag to enable optional Weights & Biases (W&B) system metrics logging for vLLM generator processes.
  • W&B Run Management: Modified W&B initialization to create distinct runs for the trainer and generator, linked by a common group, and appended with -trainer and -generator suffixes respectively, improving experiment organization.
  • Generator Metrics Collection: Enabled the first vLLM generator (engine_id=0) to initialize its own W&B run for collecting GPU power, memory, and utilization metrics, providing deeper insights into generator performance.
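The third highlight's engine gate can be sketched minimally. The helper name and call shape below are assumptions, not the PR's actual code; the point is that only the first vLLM engine (engine_id == 0) opens a W&B run, so generator GPU metrics are collected exactly once rather than once per engine.

```python
# Illustrative sketch of the engine_id gate: only engine 0 initializes W&B
# when --log_generator_metrics is enabled. Names are hypothetical.
def maybe_init_generator_wandb(engine_id: int, log_generator_metrics: bool,
                               init_fn) -> bool:
    """Run init_fn (e.g. a wandb.init wrapper) only on the first engine."""
    if log_generator_metrics and engine_id == 0:
        init_fn()
        return True
    return False

# Usage: across four engines, only engine 0 initializes.
calls = []
for eid in range(4):
    maybe_init_generator_wandb(eid, True, lambda: calls.append("init"))
print(calls)  # ['init']
```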


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces optional Weights & Biases (W&B) system metrics logging for the first vLLM generator process, controlled by the --log_generator_metrics flag. The implementation correctly sets up conditional W&B initialization, passes the necessary configuration through the call stack, and ensures that W&B is initialized appropriately within the Ray actor environment. The use of wandb.Settings(start_method="thread") is a good practice for this setup.
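The start_method="thread" setting the reviewer calls out can be sketched as below. wandb.init and wandb.Settings are real wandb APIs (and the thread start method is what the PR uses, per the review); the helper names and the kwargs split are illustrative assumptions, not the PR's actual code.

```python
# Hedged sketch: opening the generator's own W&B run inside a Ray actor.
# A pure helper builds the kwargs so it can be tested without wandb installed.
def generator_init_kwargs(project: str, base_name: str) -> dict:
    """Pure helper: keyword arguments for the generator's wandb.init call."""
    return {
        "project": project,
        "name": f"{base_name}-generator",  # suffix links it to the trainer run
        "group": base_name,                # shared group shown in the W&B UI
    }

def init_generator_run(project: str, base_name: str):
    import wandb  # lazy import: the actor process imports it, tests need not
    return wandb.init(
        # Inside a Ray actor, forking to start wandb's background service can
        # misbehave; starting it in a thread is the safer choice:
        settings=wandb.Settings(start_method="thread"),
        **generator_init_kwargs(project, base_name),
    )
```

The design choice here is that the suffix and group come from one shared base name, so the trainer and generator runs always land in the same W&B group without extra coordination.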
