feat: swap out internal device array usage with StridedMemoryView
#703
Conversation
Greptile Summary
Important Files Changed
Confidence score: 3/5
This PR can't be merged until the next release of cuda-core, because I depend on some unreleased features there. However, it's still worth reviewing.

I managed to recover a good amount of the devicearray performance by avoiding the SMV conversion entirely and spoofing the interface.

However, there is still a slowdown of ~60%, but only in the many-args case (it's about 15% in the single-argument case). This is much better than the previous commit, which was upwards of 2.5x.
Greptile Overview
Greptile Summary
Refactored kernel argument handling to use StridedMemoryView internally, enabling direct __dlpack__ protocol support and improving CuPy interoperability (~3x speedup).
Key Changes
- Replaced `auto_device()` calls with `_to_strided_memory_view()` for unified array handling
- Added LRU caching to type inference functions (`typeof`, `from_dtype`, `strides_from_shape`) to reduce overhead (see the caching sketch after this list)
- Converted several properties to `@functools.cached_property` for performance
- Refactored `Out`/`InOut` classes to use an inheritance pattern with a `copy_input` class variable
- Changed the `strides_from_shape()` API from `order="C"/"F"` to boolean flags `c_contiguous`/`f_contiguous`
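A minimal sketch of the caching pattern summarized above, using hypothetical names rather than the actual numba-cuda helpers: the type-inference inputs (dtype strings, shape tuples, flags) are hashable, so results can be memoised with `functools.lru_cache`, and per-instance derived values with `functools.cached_property`.

```python
import functools

import numpy as np


@functools.lru_cache(maxsize=None)
def dtype_from_str(dtype_str):
    # Deterministic mapping with a hashable argument: safe to cache globally.
    return np.dtype(dtype_str)


class ArrayArg:
    # Stand-in for a kernel-argument wrapper; not the numba-cuda class.
    def __init__(self, dtype_str, shape):
        self.dtype = dtype_from_str(dtype_str)
        self.shape = shape

    @functools.cached_property
    def nbytes(self):
        # Computed on first access, then stored on the instance.
        return self.dtype.itemsize * int(np.prod(self.shape, dtype=np.int64))


arg = ArrayArg("float32", (1024, 1024))
assert arg.nbytes == 4 * 1024 * 1024
```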
Issues Found
- Logic bug in `strides_from_shape()`: when both `c_contiguous` and `f_contiguous` are False, the function produces incorrect strides (it computes F-contiguous strides and then reverses them, which is neither a C nor an F layout); see the illustration below
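A short, self-contained illustration of why that fallback is wrong (hypothetical helpers, not the numba-cuda implementation): for a non-square shape, reversing F-contiguous strides does not yield C-contiguous strides.

```python
def f_strides(shape, itemsize):
    # F-contiguous: the first axis varies fastest.
    strides, acc = [], itemsize
    for dim in shape:
        strides.append(acc)
        acc *= dim
    return tuple(strides)


def c_strides(shape, itemsize):
    # C-contiguous: the last axis varies fastest.
    return tuple(reversed(f_strides(tuple(reversed(shape)), itemsize)))


shape, itemsize = (2, 3), 8
print(c_strides(shape, itemsize))                   # (24, 8)  -- C layout
print(f_strides(shape, itemsize))                   # (8, 16)  -- F layout
print(tuple(reversed(f_strides(shape, itemsize))))  # (16, 8)  -- neither
```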
Performance Trade-offs
The PR documents a ~2.5x regression for legacy device_array() in exchange for ~3x improvement for CuPy arrays. This aligns with the project's strategic direction toward ecosystem integration.
Confidence Score: 4/5
- This PR is safe to merge with one logic issue that needs fixing
- Score reflects a well-structured refactoring with proper caching optimizations, but one critical logic bug in `strides_from_shape()` when both contiguity flags are False needs resolution before merge
- numba_cuda/numba/cuda/np/numpy_support.py: fix the `strides_from_shape()` logic for handling non-contiguous arrays
Important Files Changed
File Analysis
| Filename | Score | Overview |
|---|---|---|
| numba_cuda/numba/cuda/np/numpy_support.py | 3/5 | Added LRU caching to strides_from_shape and from_dtype; changed API from order parameter to c_contiguous/f_contiguous flags. Logic issue: when both flags are False, function computes F-contiguous strides then reverses them unexpectedly. |
| numba_cuda/numba/cuda/cudadrv/devicearray.py | 4/5 | Added _to_strided_memory_view and _make_strided_memory_view helper functions to support DLPack protocol; converted nbytes and added _strided_memory_view_shim to cached properties. Implementation looks solid. |
| numba_cuda/numba/cuda/args.py | 4/5 | Refactored Out and InOut classes to use StridedMemoryView; changed _numba_type_ to cached property. Clean refactor with proper class inheritance. |
| numba_cuda/numba/cuda/dispatcher.py | 4/5 | Updated kernel argument marshaling to work with StridedMemoryView objects instead of DeviceNDArray. Uses fallback to strides_from_shape when strides not available. |
| numba_cuda/numba/cuda/typing/typeof.py | 5/5 | Added LRU caching to _typeof_cuda_array_interface by extracting logic into cached helper functions. All parameters are hashable, caching is safe and should improve performance. |
| numba_cuda/numba/cuda/np/arrayobj.py | 5/5 | Updated call to strides_from_shape to use new keyword-only argument API with c_contiguous=True. Minimal, straightforward change. |
6 files reviewed, 1 comment
rparolin left a comment
Generally looks good to me. I'm a bit on the fence about shipping a known performance regression to a deprecated type. I'd feel better if we removed it first instead of regressing on performance. All that being said, the regression has improved from the initially reported 2.5x.
I'd still wait to merge until @gmarkall gives the final 👍
So where we are is that using CuPy arrays has ~3x less latency, but using device arrays or torch tensors has ~60% more latency? On the torch front, NVIDIA/cuda-python#1439 may help in bypassing the slow …
Almost. Will post new numbers in a bit.
Passing …
Greptile Overview
Greptile Summary
Refactors internal kernel argument handling to use StridedMemoryView from cuda-python, enabling direct __dlpack__ protocol support for external arrays like CuPy. Replaces __cuda_array_interface__ handling with the unified StridedMemoryView API and adds LRU caching to type inference paths to reduce overhead. Performance measurements show ~3x improvement for CuPy arrays but ~2.5x regression for legacy device_array() objects, which the PR justifies as an acceptable trade-off favoring ecosystem integration over deprecated APIs.
Confidence Score: 1/5
- Critical logic bug in strides fallback for 0-dimensional arrays will cause incorrect behavior
- The PR contains a critical logic error in dispatcher.py line 558, where the strides fallback uses the `or` operator with potentially empty tuples. For 0-dimensional arrays, `strides_in_bytes` is legitimately `()`, but empty tuples are falsy in Python, triggering unnecessary fallback computation (see the reproduction below). While the fallback should also return `()`, this indicates a misunderstanding of the truthiness semantics that could mask other issues. Additionally, there are multiple stream handling edge cases around stream=0 that should be verified for correctness.
- numba_cuda/numba/cuda/dispatcher.py requires immediate attention for the strides fallback bug; numba_cuda/numba/cuda/args.py needs verification of stream_ptr=0 handling semantics
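A minimal reproduction of the truthiness pitfall described above, with hypothetical names standing in for the real fallback in dispatcher.py.

```python
def fallback_strides(shape, itemsize):
    # Stand-in recomputation from shape; for a 0-d array this is also ().
    strides, acc = [], itemsize
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return tuple(reversed(strides))


strides_in_bytes = ()  # a legitimate value reported for a 0-d array

# `()` is falsy, so `or` recomputes even though the reported value was valid:
chosen = strides_in_bytes or fallback_strides((), 8)

# An explicit check only falls back when nothing was reported at all:
chosen = strides_in_bytes if strides_in_bytes is not None else fallback_strides((), 8)
```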
Important Files Changed
File Analysis
| Filename | Score | Overview |
|---|---|---|
| numba_cuda/numba/cuda/dispatcher.py | 1/5 | Refactored kernel argument marshaling to use StridedMemoryView; critical bug in strides fallback logic |
| numba_cuda/numba/cuda/cudadrv/devicearray.py | 2/5 | Added _to_strided_memory_view and _make_strided_memory_view functions for dlpack/CAI conversion; changed nbytes to cached_property |
| numba_cuda/numba/cuda/args.py | 3/5 | Refactored Out and InOut classes to use _to_strided_memory_view; InOut now inherits from Out with copy_input=True (see the sketch below the table) |
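An illustrative sketch of the `Out`/`InOut` pattern noted in the table above, with hypothetical hook names rather than the actual numba-cuda classes: `Out` owns the copy-back-to-host behaviour, and `InOut` reuses it, only flipping a class variable so data is also copied to the device before launch.

```python
class Out:
    # Output-only argument: allocate on device, copy back to host afterwards.
    copy_input = False

    def __init__(self, host_array):
        self.host_array = host_array

    def prepare(self, allocate, copy_to_device):
        buf = allocate(self.host_array)
        if self.copy_input:
            copy_to_device(buf, self.host_array)
        return buf

    def finalize(self, copy_to_host, buf):
        copy_to_host(self.host_array, buf)


class InOut(Out):
    # Same machinery; the input data is copied to the device as well.
    copy_input = True
```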
I also added a benchmark demonstrating that the additional overhead with … Maybe there's some way that we can reduce that further, but I haven't looked into it.
The devicearray regressions are somewhat concerning, but given we are actively working towards deprecating it, I think it would still be worth it. Do we have a sense of the follow-up work from here that would help to ameliorate the performance overheads related to torch?
At least some of the remaining overhead is related to stream synchronization, but that may be justified/useful in some cases, I'm guessing. After that, I'm not sure; it will require more investigation. Just to make sure we're on the same page: our expectation is that if an array is on device, then the kernel launch overhead should amount to a collection of relatively cheap attribute accesses. Is that correct?
My 2c: numba-cuda shouldn't be in the business of handling stream synchronization; if someone is passing an array on a different stream through dlpack / CAI, it becomes their responsibility to launch the kernel on a stream that is synchronized with respect to the passed stream. This is likely a breaking change that would need to be clearly and loudly deprecated and subsequently removed.
Yes. Kernel launch latency is quite important; we should aim for less than 1µs of overhead.
Got it. Yeah, I don't really know enough about how this functionality is used or assumed to be used to have an informed opinion (yet!), but simply removing sync (in the torch case by passing -1 as the stream pointer to …
Roger that, I think we can get there if not very close.
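For reference, a toy sketch of the `__dlpack__` stream argument being discussed (hypothetical producer class, not a real library): under the DLPack/array API convention, a consumer can pass stream=-1 to ask the producer to skip synchronization and take responsibility for stream ordering itself.

```python
class FakeProducer:
    # Toy producer used only to show the calling convention; real libraries
    # (CuPy, PyTorch) implement __dlpack__ on their array/tensor types.
    def __dlpack__(self, stream=None):
        if stream == -1:
            print("consumer asked for no synchronization")
        else:
            print(f"producer may synchronize against stream {stream!r}")
        return object()  # a real implementation returns a DLPack capsule


arr = FakeProducer()
arr.__dlpack__()           # default: producer may synchronize
arr.__dlpack__(stream=-1)  # -1: skip sync; caller owns stream ordering
```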
10 files reviewed, 1 comment
From offline conversation: the torch benchmark may be creating torch tensors on a different stream than the CUDA default stream and causing stream synchronization.
Traced through the pytorch … The current implementation uses the default stream from torch and does NO synchronization. Whatever additional overhead there is appears to be from the various setup code and checks that happen before actually getting to the C API call that builds the DLPack capsule.
The cudf tests are failing because their …
I ended up trying to patch in the places where the version was 0 or 1 in cudf's …
11 files reviewed, 1 comment
There is no external behavioural change expected from switching to use SMV, but using SMV from the simulator causes issues. To avoid this, we copy the original args code into the simulator and use it there.
Additional Comments (1)
- pyproject.toml, lines 27-34 (link), logic: Inconsistent `cuda-core` version requirements: the main dependencies require `>=0.5.1` (line 20), but the optional `cu12`/`cu13` sections still specify older minimums (`>=0.3.0` and `>=0.3.2`). This could cause dependency resolution issues if someone installs with the `cu12` or `cu13` extras and gets an older `cuda-core` version that doesn't support the new `StridedMemoryView` functionality. Should the `cu12`/`cu13` optional dependency groups also require `cuda-core>=0.5.1` to match the main dependencies?
15 files reviewed, 2 comments
# TODO: remove the patch and its application after 26.02 is released
patchfile="${PWD}/ci/patches/cudf_numba_cuda_compatibility.patch"
pushd "$(python -c 'import site; print(site.getsitepackages()[0])')"
# strip 3 slahes to apply from the root of the install
syntax: typo in comment - 'slahes' should be 'slashes'
- # strip 3 slahes to apply from the root of the install
+ # strip 3 slashes to apply from the root of the install


Summary
Refactor kernel argument handling to use `StridedMemoryView` internally, enabling direct support for `__dlpack__` objects and improving interoperability with libraries like CuPy.
Closes: #152
Tracking issue: #128
Key Changes
New capability: Kernel arguments now accept objects with the `__dlpack__` protocol directly (e.g., CuPy arrays).
Internals: Replaced array interface handling with `cuda.core.utils.StridedMemoryView` (see the sketch below) for:
- `__dlpack__` objects (new)
- `__cuda_array_interface__` objects
- `DeviceNDArray`
Performance:
- `device_array()` arrays: ~2.5x regression (initial measurements)
- …: slow. Previously it was going through CAI, but its CAI version isn't supported by `StridedMemoryView`
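As a rough sketch of what the new internal path consumes (not the numba-cuda code itself; it assumes the `cuda.core.utils` import path named above and a `stream_ptr` keyword, both of which may differ between cuda-core releases), a `StridedMemoryView` can be built from any `__dlpack__` or `__cuda_array_interface__` exporter and exposes the pointer/shape/strides/dtype metadata a kernel launcher needs:

```python
import cupy as cp
from cuda.core.utils import StridedMemoryView  # import path as referenced in this PR

arr = cp.arange(12, dtype=cp.float32).reshape(3, 4)

# stream_ptr=-1 asks the exporter not to synchronize; the caller then owns
# any cross-stream ordering (see the stream discussion in this thread).
view = StridedMemoryView(arr, stream_ptr=-1)

print(view.ptr)                   # device pointer as an integer
print(view.shape)                 # (3, 4)
print(view.strides)               # strides metadata (may be None for contiguous data)
print(view.dtype)                 # numpy dtype: float32
print(view.is_device_accessible)  # True for device memory
```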
Performance Trade-off Discussion
The 2.5x slowdown for `device_array()` is worth discussing (and perhaps the torch regression is as well):
Arguments for accepting this regression:
- `__dlpack__` libraries represent the primary ecosystem (or at least the end goal) for GPU computing in Python that we are prioritizing
- `device_array()` is primarily used in legacy code and tests and is deprecated
Why this might be worth merging despite the regression:
- … the project's direction
- … it proves important
Implementation Details
- `_to_strided_memory_view()` and `_make_strided_memory_view()` helper functions (numba_cuda/numba/cuda/cudadrv/devicearray.py:247-359)
- LRU caching of `typeof` for CAI objects to reduce type inference overhead (typing/typeof.py:315-365)
Testing
Existing test suite passes.
TL;DR: Adds `__dlpack__` support (~3x faster for CuPy), with a ~2.5x regression on legacy `device_array()`. The trade-off favors ecosystem integration, but open to discussion.