Scope fixed-size fixture to class; exercise distinct bins in binning tests #2291
Conversation
Scope the `FixedSizeMemoryResource` fixture to the test class so the resource is created once per upstream rather than once per `(dtype, nelem, alloc)` combination, matching the pattern established for `BinningMemoryResource` in rapidsai#2284.

Rework the binning test so allocations actually span multiple bins. The previous bin range (2^18–2^22) pre-allocated ~992 MiB of managed memory while the test data never exceeded 1 KiB, so every allocation landed in the same bin. Shrink the auto-created range to 2^10–2^17 (1 KiB–128 KiB) so the six `_BINNING_NELEMS` values each route to a distinct fixed-size bin with `float64`. Add an explicit 128 MiB `CudaMemoryResource` bin and a dedicated `test_binning_large_allocation` that copies 128 MiB of data through it.
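The routing claim above ("each value lands in a distinct bin") can be sketched as "smallest power-of-two bin in range that fits the request". This is a simplified model of how a binning resource selects a fixed-size bin, and the element counts below are illustrative, not the actual `_BINNING_NELEMS` values:

```python
# Simplified model of fixed-size bin selection in the new 2^10-2^17 range.
# A request routes to the smallest power-of-two bin that fits it; requests
# larger than the biggest bin fall through to the large-allocation path.
MIN_BIN, MAX_BIN = 2**10, 2**17  # 1 KiB .. 128 KiB

def bin_for(nbytes):
    """Return the bin size serving an nbytes request, or None if too large."""
    size = MIN_BIN
    while size < nbytes:
        size *= 2
    return size if size <= MAX_BIN else None

ITEMSIZE = 8  # bytes per float64 element
for nelem in (100, 1_000, 10_000):  # illustrative element counts
    print(nelem, "->", bin_for(nelem * ITEMSIZE))

# A 128 MiB request exceeds every fixed-size bin and falls through:
print(bin_for(128 * 1024 * 1024))
```

With float64 data, 100, 1,000, and 10,000 elements route to the 1 KiB, 8 KiB, and 128 KiB bins respectively, while the 128 MiB copy bypasses the fixed-size bins entirely.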
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
python/rmm/rmm/tests/test_binning_memory_resource.py (1)
63-66: ⚠️ Potential issue | 🟠 Major

Make `_apply_mr` a generator fixture with proper resource restoration.

The fixture at lines 63-66 sets the device resource but never restores it. Since `_apply_mr` is autouse and function-scoped with a class-scoped `binning_mr`, each test overwrites the global device resource without cleanup, causing state leakage between tests. Convert the fixture to use `yield` with a try/finally block to restore the previous resource, following the pattern already established in `rmm.statistics.statistics()`:

Suggested fix:
```diff
 class TestBinningMemoryResource:
     @pytest.fixture(autouse=True)
     def _apply_mr(self, binning_mr):
         """Set the device resource before each test."""
-        rmm.mr.set_current_device_resource(binning_mr)
+        previous = rmm.mr.get_current_device_resource()
+        rmm.mr.set_current_device_resource(binning_mr)
+        try:
+            yield
+        finally:
+            rmm.mr.set_current_device_resource(previous)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/rmm/rmm/tests/test_binning_memory_resource.py` around lines 63 - 66, The _apply_mr fixture currently sets the device resource via rmm.mr.set_current_device_resource(binning_mr) but never restores the previous resource; change _apply_mr into a generator (yield) fixture that saves the current resource before setting binning_mr, yields to run the test, and in a finally block restores the saved resource using rmm.mr.set_current_device_resource(previous), mirroring the yield+try/finally pattern used in rmm.statistics.statistics(); ensure the fixture remains autouse so each test temporarily sets binning_mr and always restores the original device resource afterward.
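The save/set/yield/restore mechanics this prompt describes are generic pytest yield-fixture behavior. Here is a minimal, self-contained sketch that mimics them with a stand-in resource registry; the `get_current_device_resource`/`set_current_device_resource` names mirror RMM's API, but the globals and the driver code at the bottom are illustrative, not RMM or pytest internals:

```python
# Stand-in "current device resource" registry (not the real RMM API).
_current_resource = "default_mr"

def get_current_device_resource():
    return _current_resource

def set_current_device_resource(mr):
    global _current_resource
    _current_resource = mr

def apply_mr(new_mr):
    """Generator mimicking a pytest yield-fixture: save, set, yield, restore."""
    previous = get_current_device_resource()
    set_current_device_resource(new_mr)
    try:
        yield
    finally:
        # Runs on normal teardown AND when the test body raises.
        set_current_device_resource(previous)

# Drive the generator the way pytest drives a yield fixture.
fixture = apply_mr("binning_mr")
next(fixture)                  # setup phase: resource swapped in
assert get_current_device_resource() == "binning_mr"
fixture.close()                # teardown phase: finally block restores
assert get_current_device_resource() == "default_mr"
```

The point of the try/finally is the failure path: even if a test errors out, closing the generator still runs the finally block, so the previous resource is always restored.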
🧹 Nitpick comments (1)
python/rmm/rmm/tests/test_binning_memory_resource.py (1)
71-85: Add a release check around the shared `binning_mr` fixture.

Because `binning_mr` is now class-scoped, one leaked allocation in a single `(dtype, nelem, alloc)` case can contaminate later cases. A small before/after assertion that memory returns to baseline would make this change much safer. Based on learnings: "Test allocation/deallocation pairs comprehensively - verify memory is released after use".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/rmm/rmm/tests/test_binning_memory_resource.py` around lines 71 - 85, Record the device memory baseline before calling array_tester and assert it returns to that baseline after each test that uses the shared class-scoped binning_mr fixture: in test_binning_memory_resource and test_binning_large_allocation (and any other tests using binning_mr) call rmm.cuda.current_context().get_memory_info() (or the project’s canonical rmm memory-info helper) to capture baseline_used before array_tester(...), run array_tester(dtype, nelem, alloc) / array_tester(np.float64, _LARGE_NELEM, alloc), then assert the returned memory usage equals baseline_used to ensure all allocations from binning_mr were released. Ensure this check runs around each array_tester invocation so a leaked allocation in one parametrized case cannot contaminate later cases.
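The before/after baseline check this prompt suggests can be sketched without a GPU by using a toy tracking allocator. `TrackingResource` and this `array_tester` are stand-ins for illustration, not RMM APIs or the real test helper:

```python
# Toy allocator that tracks outstanding bytes, standing in for a real
# RMM memory resource wrapped in a statistics adaptor.
class TrackingResource:
    def __init__(self):
        self.current_bytes = 0

    def allocate(self, nbytes):
        self.current_bytes += nbytes
        return object()  # stand-in for a device buffer

    def deallocate(self, nbytes):
        self.current_bytes -= nbytes

def array_tester(mr, nbytes):
    """Toy allocate/use/free cycle standing in for the real test body."""
    buf = mr.allocate(nbytes)
    del buf
    mr.deallocate(nbytes)

mr = TrackingResource()
baseline = mr.current_bytes          # record baseline before the test body
array_tester(mr, 128 * 1024)         # run one parametrized case
assert mr.current_bytes == baseline  # every allocation was released
```

Wrapping each parametrized case in this baseline assertion is what keeps a leak in one `(dtype, nelem, alloc)` case from silently contaminating later cases that share the class-scoped resource.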
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@python/rmm/rmm/tests/test_fixed_size_memory_resource.py`:
- Around line 47-63: Remove the TestFixedSizeMemoryResource class wrapper and
convert the contained test into a standalone function named
test_fixed_size_memory_resource that accepts the fixed_size_mr fixture (retain
the existing pytest.mark.parametrize decorators for "dtype", "nelem", and
"alloc"); drop the class-level autouse fixture _apply_mr and instead rely on the
existing fixed_size_mr fixture (or make that fixture module-scoped/autouse if
you need one resource per module) to call rmm.mr.set_current_device_resource;
keep the assertions using rmm.mr.get_current_device_resource_type() is
rmm.mr.FixedSizeMemoryResource and call array_tester(dtype, nelem, alloc) inside
the standalone function.
- Around line 48-52: The autouse fixture _apply_mr currently sets the device
resource with rmm.mr.set_current_device_resource(fixed_size_mr) but does not
restore the previous global resource; change it to a yield-style fixture that
captures the prior resource (e.g., prev = rmm.mr.get_current_device_resource()),
sets the new one, yields to run the test, and then restores the prior resource
with rmm.mr.set_current_device_resource(prev) in the teardown so the resource
lifecycle is self-contained and mirrors the pattern used in rmm.statistics.
📒 Files selected for processing (2)

- python/rmm/rmm/tests/test_binning_memory_resource.py
- python/rmm/rmm/tests/test_fixed_size_memory_resource.py
/merge
Description

Scope the `FixedSizeMemoryResource` fixture to the test class so the resource is created once per upstream rather than once per `(dtype, nelem, alloc)` combination, matching the pattern from #2284.

Rework `test_binning_memory_resource` so allocations span multiple distinct bins. The previous bin range (2^18–2^22) pre-allocated ~992 MiB of managed memory while test data never exceeded 1 KiB, so every allocation landed in the same bin. The new range (2^10–2^17) creates bins from 1 KiB to 128 KiB, and each `_BINNING_NELEMS` value routes to a different fixed-size bin with `float64`. An explicit 128 MiB `CudaMemoryResource` bin and a dedicated `test_binning_large_allocation` exercise the large-allocation path.

Checklist