
Conversation

@hamishivi (Collaborator) commented Jan 29, 2026

Fixes #1385. We now accumulate eval results across training steps. If we hit a new eval step but still have partial results from the previous one, we log a warning and discard them without logging metrics.

The only thing I'm unsure about: the training step associated with the logged metrics is the step at which we actually log them. This is technically wrong, since the eval samples can come from earlier steps, but I think it's better than pausing all training to wait for the eval samples. The main use case is tracking performance, so it should still work (we want to observe that more training steps -> higher eval scores).
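
To make the description concrete, here is a minimal sketch of an accumulator along these lines. The class name `EvalAccumulator` matches the PR, but the fields and method names (`start`, `add`, `is_complete`, `reset`) are illustrative assumptions, not the merged code:

```python
from dataclasses import dataclass, field


@dataclass
class EvalAccumulator:
    """Collects eval results that may arrive over several training steps (sketch)."""

    num_expected: int = 0
    started_at_step: int | None = None
    results: dict = field(default_factory=dict)  # keyed by sample index

    def start(self, num_expected: int, training_step: int) -> None:
        self.num_expected = num_expected
        self.started_at_step = training_step
        self.results.clear()

    def add(self, sample_index: int, result) -> None:
        # Ignore duplicate deliveries so a re-sent result cannot skew metrics.
        self.results.setdefault(sample_index, result)

    def is_complete(self) -> bool:
        return self.num_expected > 0 and len(self.results) >= self.num_expected

    def reset(self) -> None:
        self.num_expected = 0
        self.started_at_step = None
        self.results.clear()
```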

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @hamishivi, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a robust mechanism to handle asynchronous evaluation results in the grpo_fast.py module. Previously, a short timeout could lead to incomplete evaluation data being discarded. By implementing an EvalAccumulator, the system can now collect evaluation responses over several training steps, ensuring that all data is gathered before metrics are computed and logged. This significantly improves the reliability and accuracy of the evaluation process, especially in scenarios where generation times might vary.

Highlights

  • Evaluation Response Loss Fix: Resolved GitHub issue #1385 (Eval responses lost on timeout in grpo_fast), where evaluation responses were lost if a 10ms timeout was hit during evaluation in grpo_fast.py.
  • EvalAccumulator Introduction: Implemented a new EvalAccumulator class to persistently track and store partial evaluation results across multiple training steps, preventing data loss on timeouts.
  • Non-Blocking Result Collection: Modified the maybe_evaluate function to pull available evaluation results from the queue using a non-blocking approach (get_nowait()) and accumulate them over time (a sketch follows this list).
  • Accurate Metric Logging: Ensured that evaluation metrics are only logged once all expected results have been received and are attributed to the training step where the evaluation was initiated, rather than where it completed.
  • Unit Tests for Accumulator: Added comprehensive unit tests for the EvalAccumulator class to verify its functionality, including initial state, reset behavior, completion checks, and duplicate handling.
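
A rough illustration of the non-blocking collection and deferred logging described in the highlights above. Only `get_nowait()` and the accumulate-then-log flow come from the PR summary; the queue name, the `score` field, and the helper names here are assumptions:

```python
import logging
import queue

logger = logging.getLogger(__name__)


def pull_eval_results(evaluation_results_Q, accumulator):
    """Drain whatever eval results are currently available, without blocking."""
    while True:
        try:
            result = evaluation_results_Q.get_nowait()
        except queue.Empty:
            break
        accumulator.add(result.sample_index, result)


def maybe_log_eval_metrics(accumulator, training_step):
    """Log eval metrics only once every expected result has arrived."""
    if not accumulator.is_complete():
        return
    scores = [r.score for r in accumulator.results.values()]  # `score` is an assumed field
    metrics = {"eval/score": sum(scores) / len(scores)}
    # Which training step to attribute these metrics to (launch step vs. the
    # step we happen to log at) is discussed in the PR description above.
    logger.info("eval metrics at step %d: %s", training_step, metrics)
    accumulator.reset()
```

Calling `pull_eval_results` on every training step keeps the queue drained without stalling training; `maybe_log_eval_metrics` only fires once the accumulator reports completion.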


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in your IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request effectively addresses the issue of lost evaluation responses on timeout by introducing an EvalAccumulator to persist partial results across training steps. The changes are well-structured, with new helper functions for pulling and finalizing evaluation results, and the main training loop is updated to manage the accumulator's lifecycle. Unit tests for the new EvalAccumulator class are also included, ensuring its correctness. My main concerns are a bug in the backward compatibility path of maybe_evaluate, which could lead to silent evaluation failures, and a potential crash when processing an empty set of evaluation results. I've left specific comments with suggested fixes for these issues.
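
For context on the empty-results concern, a guard along these lines would avoid the crash; the function and field names below are assumptions, not the reviewed diff:

```python
import logging

logger = logging.getLogger(__name__)


def finalize_eval_results(results):
    """Compute summary metrics, tolerating an empty result set."""
    if not results:
        # Without this guard, a reduction such as a mean over zero scores
        # would raise ZeroDivisionError at runtime.
        logger.warning("No eval results were accumulated; skipping metric computation.")
        return None
    scores = [r.score for r in results]  # `score` is an assumed field
    return {"eval/score": sum(scores) / len(scores)}
```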

@hamishivi force-pushed the fix-eval-timeout-1385 branch 15 times, most recently from bb8b5a3 to 1252dc1 on January 30, 2026 at 00:10
@hamishivi (Collaborator, Author) commented

/gemini review

gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request correctly fixes an issue where partial evaluation responses could be lost due to timeouts. The new approach of accumulating results in a list across evaluation steps is a solid improvement.

My review includes one high-severity comment on open_instruct/grpo_fast.py that identifies a bug in the new logic for the final evaluation step, where an incomplete evaluation would be skipped. I've also pointed out a related performance issue with the timeout handling and suggested a more robust implementation to address both points.
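
One way to address both points, sketched here with assumed names (`results_Q`, `accumulator.add`, `accumulator.is_complete`); this is not the implementation the review actually suggested:

```python
import queue


def drain_eval_queue(results_Q, accumulator, block_until_complete=False, timeout_s=600.0):
    """Pull eval results; optionally block on the final step so nothing is dropped."""
    # Fast path: grab everything currently available without waiting.
    while True:
        try:
            result = results_Q.get_nowait()
        except queue.Empty:
            break
        accumulator.add(result.sample_index, result)
    # On the final training step there is no later step to finish the eval,
    # so wait (bounded by timeout_s per result) for the remainder instead of
    # silently skipping an incomplete evaluation.
    while block_until_complete and not accumulator.is_complete():
        try:
            result = results_Q.get(timeout=timeout_s)
        except queue.Empty:
            break
        accumulator.add(result.sample_index, result)
```

The non-blocking drain keeps ordinary training steps from stalling, while the bounded blocking loop only runs on the final step, where dropping an incomplete eval would otherwise lose the last data point.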

@hamishivi force-pushed the fix-eval-timeout-1385 branch 6 times, most recently from 6e24a8d to 9aa9ebf on January 30, 2026 at 00:21
@hamishivi force-pushed the fix-eval-timeout-1385 branch from 9aa9ebf to d4342d6 on January 30, 2026 at 23:42
@hamishivi force-pushed the fix-eval-timeout-1385 branch from 3f08939 to 4d546fe on January 31, 2026 at 00:20
@hamishivi force-pushed the fix-eval-timeout-1385 branch from 4d546fe to 172ff73 on January 31, 2026 at 00:21
@hamishivi added this pull request to the merge queue on Jan 31, 2026
Merged via the queue into main with commit 3dec8b9 on Jan 31, 2026
7 checks passed
@hamishivi deleted the fix-eval-timeout-1385 branch on January 31, 2026 at 00:37