fix: clamp VQ indices to prevent CUDA out-of-bounds errors #86
Summary
Clamp codebook indices to the valid range `[0, codebook_size - 1]` before passing them to `ResidualVQ.get_output_from_indices()` from `vector-quantize-pytorch`.

Problem

When using `vector-quantize-pytorch >= 1.20` with PyTorch 2.10+ and Python 3.14, the codebook indices produced by HeartMuLa can occasionally exceed the valid codebook range, causing a CUDA out-of-bounds error (device-side assert). The error occurs in `einx.get_at()` when indexing into the codebook with out-of-bounds values.
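For context, here is a minimal illustration of the failure mode (an assumed setup for this write-up, not a repro taken from the PR): indexing a CUDA tensor with a value at or past the end of its first dimension trips an asynchronous device-side assert.

```python
import torch

# Illustration only (assumed shapes, not from the PR diff): a fake
# codebook with 1024 entries, indexed one past its last row.
codebook = torch.randn(1024, 256, device="cuda")  # codebook_size = 1024
bad = torch.tensor([1024], device="cuda")         # valid range is 0..1023
out = codebook[bad]  # raises "CUDA error: device-side assert triggered"
```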
Solution

Added bounds checking to clamp indices before the VQ lookup:
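A minimal sketch of that clamp, assuming the indices are decoded through `ResidualVQ.get_output_from_indices()` as described above; the wrapper name and the explicit `codebook_size` argument are illustrative, not the exact diff:

```python
import torch
from vector_quantize_pytorch import ResidualVQ

def clamped_output_from_indices(
    rvq: ResidualVQ, indices: torch.Tensor, codebook_size: int
) -> torch.Tensor:
    # Force every index into [0, codebook_size - 1] so the internal
    # einx.get_at() codebook lookup can never read out of bounds.
    indices = indices.clamp(0, codebook_size - 1)
    return rvq.get_output_from_indices(indices)
```

Clamping silently maps an invalid token to the nearest valid codebook entry; an alternative would be to assert and fail fast, but clamping keeps generation running when the model occasionally emits an out-of-range token.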
Test Environment
Test plan
🤖 Generated with Claude Code