Scope fixed-size fixture to class; exercise distinct bins in binning tests#2291

Merged
rapids-bot[bot] merged 3 commits into rapidsai:main from bdice:scope-fixed-size-and-binning-test-fixtures
Mar 16, 2026

Conversation

@bdice
Collaborator

@bdice bdice commented Mar 12, 2026

Description

Scope the FixedSizeMemoryResource fixture to the test class so the resource is created once per upstream rather than once per (dtype, nelem, alloc) combination, matching the pattern from #2284.

Rework test_binning_memory_resource so allocations span multiple distinct bins. The previous bin range (2^18–2^22) pre-allocated ~992 MiB of managed memory while test data never exceeded 1 KiB — every allocation landed in the same bin. The new range (2^10–2^17) creates bins from 1 KiB to 128 KiB, and each _BINNING_NELEMS value routes to a different fixed-size bin with float64. An explicit 128 MiB CudaMemoryResource bin and a dedicated test_binning_large_allocation exercise the large-allocation path.
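As a rough illustration of the routing math (the element counts below are hypothetical stand-ins, not the actual `_BINNING_NELEMS` values from the test file), a float64 allocation of `nelem * 8` bytes is served by the smallest power-of-two bin that fits it, and anything above 2^17 bytes falls through to the large-allocation bin:

```python
ITEMSIZE = 8  # bytes per float64 element

def bin_for(nbytes, min_exp=10, max_exp=17):
    """Smallest power-of-two bin >= nbytes within the auto-created
    2^10..2^17 range; None means it falls through to the explicit
    large-allocation bin."""
    for exp in range(min_exp, max_exp + 1):
        if nbytes <= 2**exp:
            return 2**exp
    return None

# Six hypothetical element counts, each routing to a distinct bin:
nelems = [100, 200, 500, 1_000, 2_000, 5_000]
bins = [bin_for(n * ITEMSIZE) for n in nelems]
assert len(set(bins)) == len(bins)  # all six land in different bins
```

With the old 2^18–2^22 range, every one of these sizes would have been below the smallest bin, so they all shared one bin; the shrunken range is what makes the sizes diverge.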

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

…tests

Scope the FixedSizeMemoryResource fixture to the test class so
the resource is created once per upstream rather than once per
(dtype, nelem, alloc) combination, matching the pattern established
for BinningMemoryResource in rapidsai#2284.

Rework the binning test so allocations actually span multiple bins.
The previous bin range (2^18-2^22) pre-allocated ~992 MiB of managed
memory while test data never exceeded 1 KiB, so every allocation
landed in the same bin.  Shrink the auto-created range to
2^10-2^17 (1 KiB-128 KiB) so the six _BINNING_NELEMS values each
route to a distinct fixed-size bin with float64.  Add an explicit
128 MiB CudaMemoryResource bin and a dedicated test_binning_large_allocation
that copies 128 MiB of data through it.
@bdice bdice requested a review from a team as a code owner March 12, 2026 17:03
@bdice bdice added non-breaking Non-breaking change improvement Improvement / enhancement to an existing function labels Mar 12, 2026
@coderabbitai

coderabbitai bot commented Mar 12, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: becec996-4b40-45ac-a69c-c80065fcc285

📥 Commits

Reviewing files that changed from the base of the PR and between 20824fe and a4964cc.

📒 Files selected for processing (1)
  • python/REVIEW_GUIDELINES.md

📝 Walkthrough

Summary by CodeRabbit

  • Tests

    • Restructured binning and fixed-size memory resource tests with improved organization and class-scoped fixtures.
    • Extended test coverage to include large memory allocation scenarios.
    • Enhanced test parametrization for upstream memory resource variants.
  • Chores

    • Updated testing guidelines to support test classes for appropriate use cases.

Walkthrough

Updates tests for RMM Python memory resources: reconfigures BinningMemoryResource bin sizes and adds a large-allocation test; refactors FixedSizeMemoryResource tests to use class-scoped fixtures for upstream resource construction and per-class setup.

Changes

  • Binning Memory Resource Tests (python/rmm/rmm/tests/test_binning_memory_resource.py): Adds numpy usage and new constants (_BINNING_NELEMS, _LARGE_NELEM); reworks the BinningMemoryResource bin map to include small fixed bins (1 KiB–128 KiB), replaces an auto-bin with a 1 KiB fixed bin, and adds an explicit 128 MiB CudaMemoryResource bin; expands the upstream PoolMemoryResource footprint to 128 MiB; updates parametrization and adds test_binning_large_allocation.
  • Fixed Size Memory Resource Tests (python/rmm/rmm/tests/test_fixed_size_memory_resource.py): Replaces per-test upstream parametrization with a class-scoped fixture pattern: introduces _make_fixed_size_mr and _UPSTREAMS factories, adds TestFixedSizeMemoryResource with an autouse device fixture and a fixed_size_mr class fixture, and consolidates tests into a parametrized class method.
  • Contributing / Guidelines (python/REVIEW_GUIDELINES.md): Relaxes guidance to allow test classes when needed for class-scoped fixture sharing.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Suggested reviewers

  • TomAugspurger
  • Matt711
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 75.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: The title directly and accurately summarizes the two main changes: scoping the fixed-size fixture to class level and reworking the binning tests to exercise distinct bins.
  • Description check — ✅ Passed: The description provides detailed context for both main changes, explains the problem with the previous approach, and justifies the solution with specific technical details.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
python/rmm/rmm/tests/test_binning_memory_resource.py (1)

63-66: ⚠️ Potential issue | 🟠 Major

Make _apply_mr a generator fixture with proper resource restoration.

The fixture at lines 63-66 sets the device resource but never restores it. Since _apply_mr is autouse and function-scoped with a class-scoped binning_mr, each test overwrites the global device resource without cleanup, causing state leakage between tests. Convert the fixture to use yield with a try/finally block to restore the previous resource, following the pattern already established in rmm.statistics.statistics():

Suggested fix
 class TestBinningMemoryResource:
     @pytest.fixture(autouse=True)
     def _apply_mr(self, binning_mr):
         """Set the device resource before each test."""
-        rmm.mr.set_current_device_resource(binning_mr)
+        previous = rmm.mr.get_current_device_resource()
+        rmm.mr.set_current_device_resource(binning_mr)
+        try:
+            yield
+        finally:
+            rmm.mr.set_current_device_resource(previous)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/rmm/rmm/tests/test_binning_memory_resource.py` around lines 63 - 66,
The _apply_mr fixture currently sets the device resource via
rmm.mr.set_current_device_resource(binning_mr) but never restores the previous
resource; change _apply_mr into a generator (yield) fixture that saves the
current resource before setting binning_mr, yields to run the test, and in a
finally block restores the saved resource using
rmm.mr.set_current_device_resource(previous), mirroring the yield+try/finally
pattern used in rmm.statistics.statistics(); ensure the fixture remains autouse
so each test temporarily sets binning_mr and always restores the original device
resource afterward.
🧹 Nitpick comments (1)
python/rmm/rmm/tests/test_binning_memory_resource.py (1)

71-85: Add a release check around the shared binning_mr fixture.

Because binning_mr is now class-scoped, one leaked allocation in a single (dtype, nelem, alloc) case can contaminate later cases. A small before/after assertion that memory returns to baseline would make this change much safer.

Based on learnings "Test allocation/deallocation pairs comprehensively - verify memory is released after use".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/rmm/rmm/tests/test_binning_memory_resource.py` around lines 71 - 85,
Record the device memory baseline before calling array_tester and assert it
returns to that baseline after each test that uses the shared class-scoped
binning_mr fixture: in test_binning_memory_resource and
test_binning_large_allocation (and any other tests using binning_mr) call
rmm.cuda.current_context().get_memory_info() (or the project’s canonical rmm
memory-info helper) to capture baseline_used before array_tester(...), run
array_tester(dtype, nelem, alloc) / array_tester(np.float64, _LARGE_NELEM,
alloc), then assert the returned memory usage equals baseline_used to ensure all
allocations from binning_mr were released. Ensure this check runs around each
array_tester invocation so a leaked allocation in one parametrized case cannot
contaminate later cases.
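The suggestion above amounts to a before/after balance check around each parametrized case. A GPU-free sketch of the pattern (the TrackingResource here is a toy stand-in for a real memory-info helper; the rmm.cuda.current_context().get_memory_info() call named in the prompt is the bot's assumption, not a confirmed API):

```python
class TrackingResource:
    """Toy allocator that counts outstanding bytes, mimicking a
    memory resource whose usage can be queried before and after a test."""
    def __init__(self):
        self.outstanding = 0

    def allocate(self, nbytes):
        self.outstanding += nbytes
        return bytearray(nbytes)

    def deallocate(self, nbytes):
        self.outstanding -= nbytes

def run_case_with_leak_check(mr, nbytes):
    # Capture the baseline, run the body (stand-in for array_tester),
    # then assert usage returned to baseline so a leak in one case
    # cannot contaminate later cases sharing the class-scoped fixture.
    baseline = mr.outstanding
    buf = mr.allocate(nbytes)
    mr.deallocate(len(buf))
    assert mr.outstanding == baseline, "allocation leaked into later cases"

mr = TrackingResource()
for nbytes in [1024, 4096, 128 * 1024]:
    run_case_with_leak_check(mr, nbytes)
```

A real implementation would read device memory usage from the shared resource (or a statistics adaptor) rather than counting bytes by hand, but the baseline-capture/restore-check shape is the same.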
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@python/rmm/rmm/tests/test_fixed_size_memory_resource.py`:
- Around line 47-63: Remove the TestFixedSizeMemoryResource class wrapper and
convert the contained test into a standalone function named
test_fixed_size_memory_resource that accepts the fixed_size_mr fixture (retain
the existing pytest.mark.parametrize decorators for "dtype", "nelem", and
"alloc"); drop the class-level autouse fixture _apply_mr and instead rely on the
existing fixed_size_mr fixture (or make that fixture module-scoped/autouse if
you need one resource per module) to call rmm.mr.set_current_device_resource;
keep the assertions using rmm.mr.get_current_device_resource_type() is
rmm.mr.FixedSizeMemoryResource and call array_tester(dtype, nelem, alloc) inside
the standalone function.
- Around line 48-52: The autouse fixture _apply_mr currently sets the device
resource with rmm.mr.set_current_device_resource(fixed_size_mr) but does not
restore the previous global resource; change it to a yield-style fixture that
captures the prior resource (e.g., prev = rmm.mr.get_current_device_resource()),
sets the new one, yields to run the test, and then restores the prior resource
with rmm.mr.set_current_device_resource(prev) in the teardown so the resource
lifecycle is self-contained and mirrors the pattern used in rmm.statistics.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c22cb994-2fa5-47cf-bda8-8e1b467d80de

📥 Commits

Reviewing files that changed from the base of the PR and between 22d812c and 20824fe.

📒 Files selected for processing (2)
  • python/rmm/rmm/tests/test_binning_memory_resource.py
  • python/rmm/rmm/tests/test_fixed_size_memory_resource.py

@bdice
Collaborator Author

bdice commented Mar 16, 2026

/merge

@bdice bdice moved this to In Progress in RMM Project Board Mar 16, 2026
@rapids-bot rapids-bot bot merged commit 22f4680 into rapidsai:main Mar 16, 2026
75 checks passed
@github-project-automation github-project-automation bot moved this from In Progress to Done in RMM Project Board Mar 16, 2026