
Conversation

@godnight10061 (Contributor)

Problem

When users pin PyTorch 2.7.x (or torchvision 0.22.x / torchaudio 2.7.x), torchruntime may still select the cu124 index URL based on detected hardware/Python version.

PyTorch 2.7 wheels are published under cu128, so pip fails with "No matching distribution found" when pointed at the cu124 index.

Conversely, if hardware defaults to cu128 but the caller pins torch <=2.6, those wheels live under cu124 and pip fails for the same reason.

This is the root cause of #16 (Difficulties with NVIDIA 50xx Support).
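
A minimal reproduction of the mismatch looks roughly like this (hypothetical invocation; it assumes install() accepts pip-style requirement strings, as described in the Solution below):

```python
# Repro sketch (hypothetical; assumes install() accepts pip-style
# requirement strings, per the Solution section below).
import torchruntime

# On a machine whose detected GPU maps torchruntime to the cu124 index,
# pinning torch 2.7.x points pip at an index with no 2.7 wheels:
torchruntime.install(["torch==2.7.0", "torchvision==0.22.0"])
# pip: ERROR: No matching distribution found for torch==2.7.0
```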

Solution

  • Infer the required CUDA wheel index (cu124 vs cu128) from pinned torch / torchvision / torchaudio requirements passed to torchruntime.install().
  • If the inferred index conflicts with the detected CUDA platform, override only the cu124/cu128 portion so the correct https://download.pytorch.org/whl/<platform> index URL is used.
  • Preserve the nightly/ prefix when applicable.
  • Do not demote Blackwell (arch 12) from cu128 to cu124, since older torch versions don’t support that architecture anyway.

If the pinned requirements imply conflicting indices, torchruntime leaves the platform selection unchanged and lets pip resolve (or fail) normally.
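
A minimal sketch of the inference and override logic (hypothetical helper names; the real change lives in torchruntime's platform selection, handles full pip version specifiers, and also implements the Blackwell exception above, which this sketch omits):

```python
# Sketch only: hypothetical helpers illustrating the cu124/cu128 inference.
import re

# First torch-family versions published under cu128; older releases are on cu124.
_CU128_MIN = {"torch": (2, 7), "torchvision": (0, 22), "torchaudio": (2, 7)}

def infer_cuda_index(requirements):
    """Return 'cu124', 'cu128', or None when unpinned or the pins conflict."""
    inferred = set()
    for req in requirements:
        m = re.match(r"(torchvision|torchaudio|torch)\s*==\s*(\d+)\.(\d+)", req)
        if not m:
            continue  # this sketch only looks at exact '==' pins
        name, version = m.group(1), (int(m.group(2)), int(m.group(3)))
        inferred.add("cu128" if version >= _CU128_MIN[name] else "cu124")
    return inferred.pop() if len(inferred) == 1 else None

def apply_override(index_url, requirements):
    """Swap only the cu124/cu128 segment, preserving any nightly/ prefix."""
    wanted = infer_cuda_index(requirements)
    return index_url if wanted is None else re.sub(r"cu12[48]", wanted, index_url)
```

For example, apply_override("https://download.pytorch.org/whl/nightly/cu124", ["torch==2.7.1"]) would yield https://download.pytorch.org/whl/nightly/cu128, while absent or conflicting pins return the URL untouched.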

Tests

  • Added regression tests that fail on main and pass with this change (their rough shape is sketched after this list).
  • python -m pytest (141 tests) passes.
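
The added regression tests take roughly this shape (hypothetical names, reusing the apply_override sketch above; the actual tests exercise torchruntime's real platform logic):

```python
# Hypothetical shape of the added regression tests.
import pytest

@pytest.mark.parametrize("pins, detected, expected", [
    (["torch==2.7.0"], "cu124", "cu128"),  # fails on main, passes with this change
    (["torch==2.6.0"], "cu128", "cu124"),
    ([], "cu124", "cu124"),                # no pins: platform left unchanged
])
def test_index_override(pins, detected, expected):
    url = f"https://download.pytorch.org/whl/{detected}"
    assert apply_override(url, pins) == f"https://download.pytorch.org/whl/{expected}"
```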

Fixes #16.

When users pin torch/torchvision/torchaudio around 2.7, torchruntime now switches between cu124 and cu128 index URLs so pip can resolve wheels. Fixes easydiffusion#16.
godnight10061 marked this pull request as a draft on December 23, 2025 at 00:47.