
Update RunInference to work with model manager #37506

Merged
damccorm merged 9 commits into apache:master from AMOOOMA:runinference
Feb 9, 2026
Conversation

@AMOOOMA (Contributor) commented Feb 4, 2026

Updates RunInference to optionally run with the model manager, which automatically keeps GPU memory under control.
Renames the original model manager to ModelHandlerManager, which better reflects its function.

Code was tested in staging and is working as intended (WAI); the new option also defaults to false, so it should be safe.
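
For reviewers who want to try the new flag, here is a minimal sketch. The handler, task, and input are illustrative placeholders; only the use_model_manager parameter comes from this PR:

```python
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.huggingface_inference import (
    HuggingFacePipelineModelHandler)

# Placeholder handler and task; any supported ModelHandler should work.
model_handler = HuggingFacePipelineModelHandler(task="sentiment-analysis")

with beam.Pipeline() as p:
  _ = (
      p
      | beam.Create(["A sample sentence to score."])
      # use_model_manager defaults to False; opting in routes model
      # loading through the manager that controls GPU memory.
      | RunInference(model_handler, use_model_manager=True))
```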


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

github-actions bot added the python label Feb 4, 2026
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @AMOOOMA, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the RunInference transform by introducing an optional model manager. This manager is designed to automatically handle GPU memory, making the inference process more efficient, especially when dealing with multiple or large models. The changes include renaming an existing internal class for better semantic clarity, updating the core RunInference DoFn to accept and utilize the new manager, and adding comprehensive tests to ensure its correct operation, particularly in GPU environments.

Highlights

  • Model Manager Integration: The RunInference transform now supports an optional model manager to automatically control GPU memory, improving resource utilization for inference tasks.
  • Refactoring and Renaming: The internal _ModelManager class has been renamed to _ModelHandlerManager to more accurately reflect its role in managing model handlers rather than raw models, and its docstring has been updated for clarity.
  • Enhanced Model Loading: A new _ProxyLoader class has been introduced to facilitate wrapping model loaders for MultiProcessShared, ensuring proper model management when the model manager is active (a rough illustration follows this list).
  • Updated _SharedModelWrapper: The _SharedModelWrapper has been modified to integrate with the new model manager, allowing it to acquire and release models through the manager, which is crucial for GPU memory control.
  • New Integration Tests: New integration tests have been added, specifically for GPU-based HuggingFace models, to validate the functionality of RunInference when using the model manager, including scenarios with large inputs, large models, and parallel inference branches.
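
The _ProxyLoader itself is internal to base.py and its exact implementation is not reproduced here. As a rough illustration of the wrapping idea only (all names below are stand-ins), a loader proxy usable with a sharing layer might look like:

```python
from typing import Callable, Generic, Optional, TypeVar

ModelT = TypeVar("ModelT")


class ProxyLoaderSketch(Generic[ModelT]):
  """Stand-in for the PR's internal _ProxyLoader: wraps a zero-arg
  loader so a sharing layer such as MultiProcessShared can invoke it
  lazily rather than holding a model instance directly."""

  def __init__(self, load_fn: Callable[[], ModelT]):
    self._load_fn = load_fn
    self._model: Optional[ModelT] = None

  def __call__(self) -> ModelT:
    # Load once on first use; later calls reuse the cached instance.
    if self._model is None:
      self._model = self._load_fn()
    return self._model
```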


Changelog
  • sdks/python/apache_beam/ml/inference/base.py
    • Imported ModelManager and added ImportError handling.
    • Renamed _ModelManager to _ModelHandlerManager and updated its docstring.
    • Updated type hints and class references from _ModelManager to _ModelHandlerManager in KeyedModelHandler and related methods.
    • Added use_model_manager and model_manager_args parameters to RunInference.__init__ and _RunInferenceDoFn.__init__.
    • Introduced _ProxyLoader class to wrap model loaders for MultiProcessShared.
    • Modified _SharedModelWrapper to conditionally use the ModelManager for model acquisition and release.
    • Updated _RunInferenceDoFn._load_model to initialize and use the ModelManager based on the use_model_manager flag.
    • Adjusted _RunInferenceDoFn._run_inference to acquire and release models via the ModelManager when enabled (a sketch of this pattern follows the changelog).
  • sdks/python/apache_beam/ml/inference/base_test.py
    • Imported the multiprocessing module.
    • Renamed test methods related to _ModelManager to _ModelHandlerManager (e.g., test_model_manager_loads_shared_model to test_model_handler_manager_loads_shared_model).
    • Updated instantiations of base._ModelManager to base._ModelHandlerManager in existing tests.
    • Added new test cases for RunInference with use_model_manager=True, including tests for passing model_manager_args.
    • Introduced a SimpleFakeModelHanlder helper class and a try_import_model_manager helper function.
    • Added ModelManagerTest class with a tearDown method to clean up multiprocessing children.
  • sdks/python/apache_beam/ml/inference/model_manager_it_test.py
    • Added a new integration test file for the model manager.
    • Included HuggingFaceGpuTest class with tests for RunInference using use_model_manager=True on GPU.
    • Tests cover sentiment analysis with large inputs, large RoBERTa models, and parallel inference branches.
    • Implemented unittest.skipIf to skip GPU-specific tests if CUDA is not available or if ModelManager cannot be imported.
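
As a rough, non-authoritative sketch of the acquire/release flow described above: the acquire and release method names and the model_tag argument are assumptions; only run_inference(batch, model, inference_args) is the actual ModelHandler API.

```python
def run_inference_with_manager(
    model_handler, model_manager, model_tag, batch, inference_args=None):
  """Illustrative only -- mirrors the acquire/release pattern described
  in the changelog, not the merged base.py code."""
  # Assumed manager API: check the model out for the duration of the call.
  model = model_manager.acquire(model_tag)
  try:
    # run_inference is the real ModelHandler API in
    # apache_beam.ml.inference.base.
    return model_handler.run_inference(batch, model, inference_args)
  finally:
    # Releasing promptly lets the manager evict idle models and keep
    # GPU memory under control.
    model_manager.release(model_tag)
```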
Activity
  • The pull request was authored by AMOOOMA with the goal of updating RunInference to integrate with a model manager for GPU memory control.
  • The description highlights the optional nature of the model manager and the renaming of _ModelManager to ModelHandlerManager for better alignment with its function.
  • The author included a standard contribution checklist.

codecov bot commented Feb 5, 2026

Codecov Report

❌ Patch coverage is 23.25581% with 33 lines in your changes missing coverage. Please review.
✅ Project coverage is 35.87%. Comparing base (4e218f0) to head (df07ad7).
⚠️ Report is 9 commits behind head on master.

Files with missing lines | Patch % | Lines
sdks/python/apache_beam/ml/inference/base.py | 23.25% | 33 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             master   #37506    +/-   ##
==========================================
  Coverage     35.87%   35.87%            
  Complexity     1676     1676            
==========================================
  Files          1063     1063            
  Lines        166481   166596   +115     
  Branches       1208     1208            
==========================================
+ Hits          59722    59765    +43     
- Misses       104578   104650    +72     
  Partials       2181     2181            
Flag | Coverage Δ
python | 39.69% <23.25%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


@AMOOOMA AMOOOMA marked this pull request as ready for review February 6, 2026 01:42
github-actions bot commented Feb 6, 2026

Checks are failing. Review will not be requested until checks are passing. If you'd like to override that behavior, comment: assign set of reviewers

AMOOOMA (Contributor, Author) commented Feb 6, 2026

R: @damccorm

damccorm (Contributor) commented Feb 6, 2026

/gemini review

gemini-code-assist bot left a comment:

Code Review

This pull request introduces an optional ModelManager to RunInference for better GPU memory management, which is a great addition. The existing _ModelManager has been aptly renamed to _ModelHandlerManager for clarity. The changes are well-structured, backward-compatible by defaulting the new feature to off, and are accompanied by solid unit and integration tests. I have one minor suggestion to correct a typo in a test class name.

damccorm left a comment:

Thanks - LGTM once tests complete

github-actions bot commented Feb 6, 2026

Assigning reviewers:

R: @damccorm for label python.

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@damccorm damccorm merged commit 6602f1e into apache:master Feb 9, 2026
101 of 103 checks passed
Review comment on the new RunInference.__init__ parameters in sdks/python/apache_beam/ml/inference/base.py:

model_metadata_pcoll: beam.PCollection[ModelMetadata] = None,
watch_model_pattern: Optional[str] = None,
model_identifier: Optional[str] = None,
use_model_manager: bool = False,
Hi @AMOOOMA , would you mind following up on this PR to add docstrings to the new RunInference params added here? Thanks a lot!
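
A possible shape for those docstrings, as suggested wording only (not the merged text; the model_manager_args description is an assumption based on the changelog):

```python
# Suggested docstring lines for RunInference.__init__:
#
#   use_model_manager: If True, load and serve models through the
#     ModelManager, which automatically keeps GPU memory under control
#     by acquiring and releasing models around inference calls.
#     Defaults to False.
#   model_manager_args: Optional arguments forwarded to the
#     ModelManager when use_model_manager is True.
```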

Also, it sounds like we might want to feature this functionality in CHANGES.md
