Include ROCm support for CUDA extensions #12
Merged: amd-sriram merged 12 commits into main (Feb 23, 2026)
Motivation
Port the CUDA extensions to ROCm:
Technical Details
Hipification process
`tools/setup_helpers/extension.py` returns the list of extensions to be used in the `setup()` call.
Hipification of the CUDA sources is performed by passing the `-DHIPBLAS_V2` flag to the C++ and HIPCC compilers (via the `cxx` and `nvcc` compile-arg lists) in either torch's `CppExtension` or `CUDAExtension`. The same approach is used in rocm/apex: https://github.com/ROCm/apex/blob/release/1.9.0/setup.py#L231.
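As a torch-free sketch of that flag plumbing (the helper name is hypothetical; the real logic lives in `tools/setup_helpers/extension.py`, and the resulting dict is what `CppExtension`/`CUDAExtension` accept as `extra_compile_args`):

```python
# Hypothetical helper sketching how the ROCm flag could be assembled into the
# per-compiler dict torch's extension builders accept (keys "cxx" and "nvcc").
def build_extra_compile_args(use_rocm: bool) -> dict:
    cxx_flags = []
    device_flags = []  # handed to hipcc under ROCm, nvcc under CUDA
    if use_rocm:
        # Same approach as rocm/apex: define HIPBLAS_V2 for both compilers.
        cxx_flags.append("-DHIPBLAS_V2")
        device_flags.append("-DHIPBLAS_V2")
    return {"cxx": cxx_flags, "nvcc": device_flags}

args = build_extra_compile_args(use_rocm=True)
```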
In addition, the CUDA source files are included when the `_USE_ROCM` flag is set, e.g.

```python
if _USE_CUDA or _USE_ROCM:
    sources.append("iir_cuda.cu")
```
Fixing porting issues
The following changes fix these porting errors:
1. TORCH_HIP_VERSION is not defined
TORCH_HIP_VERSION is defined in `tools/setup_helpers/extension.py`, similar to https://github.com/ROCm/pytorch/blob/develop/cmake/public/LoadHIP.cmake#L166
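The analogous computation in the Python setup helper can be sketched as follows (the function name is hypothetical; the formula mirrors LoadHIP.cmake's `(major * 100) + minor`):

```python
def torch_hip_version(hip_version_major: int, hip_version_minor: int) -> int:
    # Mirrors LoadHIP.cmake:
    # math(EXPR TORCH_HIP_VERSION "(${HIP_VERSION_MAJOR} * 100) + ${HIP_VERSION_MINOR}")
    return hip_version_major * 100 + hip_version_minor

# e.g. HIP 6.2 -> 602, which can then be passed as -DTORCH_HIP_VERSION=602
```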
```cmake
math(EXPR TORCH_HIP_VERSION "(${HIP_VERSION_MAJOR} * 100) + ${HIP_VERSION_MINOR}")
```
2. hip namespace not defined
The function is defined in `src/libtorchaudio/cuda_utils.h`.
The namespace `libtorchaudio::cuda` is not hipified in this file, but it does get hipified at the call site:
```cpp
options.stream_ = libtorchaudio::cuda::getCurrentCUDAStream(logits.get_device_index());
```
becomes
```cpp
options.stream_ = libtorchaudio::hip::getCurrentHIPStreamMasqueradingAsCUDA(logits.get_device_index());
```
To fix this, create a `hip_namespace_shim` that maps the functions provided in `src/libtorchaudio/cuda_utils.h`.
This file is included in the following CUDA source files:
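To see why the call site changes, here is an illustrative Python sketch of the kind of identifier renaming hipify performs (the mapping table below is a tiny hypothetical subset, not hipify's real table, which is applied by PyTorch's hipify tooling):

```python
# Hypothetical subset of hipify's identifier renames, for illustration only.
HIPIFY_RENAMES = {
    "cuda": "hip",
    "getCurrentCUDAStream": "getCurrentHIPStreamMasqueradingAsCUDA",
}

def hipify_line(line: str) -> str:
    # Apply longer keys first so short renames don't clobber longer identifiers.
    for old, new in sorted(HIPIFY_RENAMES.items(), key=lambda kv: -len(kv[0])):
        line = line.replace(old, new)
    return line

call = "options.stream_ = libtorchaudio::cuda::getCurrentCUDAStream(idx);"
# -> "options.stream_ = libtorchaudio::hip::getCurrentHIPStreamMasqueradingAsCUDA(idx);"
```

The shim exists because `cuda_utils.h` itself escapes this renaming, so the hipified call sites need `libtorchaudio::hip::...` names to resolve to the functions it declares.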
3. Kernel launch parameters are incorrect
Corrected the parameters in `THO_DISPATCH_V2`, based on https://github.com/ROCm/pytorch/blob/develop/test/cpp_extensions/libtorch_agn_2_9_extension/csrc/kernel.cpp#L361
Removing @skipIfRocm from tests
Test Plan
Run this branch on both an NVIDIA machine and an AMD machine; check that it installs, and run the unit tests for the CUDA extensions:
Test Result
Number of passed unit tests:
Submission Checklist