
@onuryuruten

Description

Adds a safety check before accessing torch.xpu.is_available() to prevent AttributeError when using PyTorch versions < 2.4.

Problem

The code currently calls torch.xpu.is_available() without first checking that the xpu attribute exists. The torch.xpu module was only added in PyTorch 2.4, so the call raises an AttributeError on earlier versions, e.g. 2.2.2:

AttributeError: module 'torch' has no attribute 'xpu'

This can break applications running PyTorch 2.0-2.3, even though F5-TTS's pyproject.toml only requires torch>=2.0.0.

Solution

Add a hasattr(torch, 'xpu') check before calling torch.xpu.is_available(). This preserves backward compatibility with PyTorch 2.0+ while still supporting Intel XPU acceleration when available.

Changes

  • Modified src/f5_tts/infer/utils_infer.py line 42
  • Changed: if torch.xpu.is_available():
  • To: if hasattr(torch, 'xpu') and torch.xpu.is_available():
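The guard pattern can be demonstrated without installing multiple PyTorch versions by passing stub modules into a hypothetical helper (`select_device` and the stubs below are illustrative only; the actual code in utils_infer.py inlines the check):

```python
from types import SimpleNamespace

def select_device(torch) -> str:
    """Pick a device string, guarding the optional torch.xpu backend.

    `torch` is a parameter here so the guard can be exercised against
    stubs; real code would use the imported torch module directly.
    """
    # hasattr() short-circuits before torch.xpu is touched, so no
    # AttributeError is raised on PyTorch < 2.4.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"

# Stub resembling PyTorch >= 2.4 with an XPU device present:
new_torch = SimpleNamespace(xpu=SimpleNamespace(is_available=lambda: True))
# Stub resembling PyTorch 2.2.2, which has no `xpu` attribute at all:
old_torch = SimpleNamespace()

print(select_device(new_torch))  # xpu
print(select_device(old_torch))  # cpu, with no AttributeError
```

The key point is the short-circuit: `hasattr(torch, 'xpu')` is evaluated first, so `torch.xpu` is never dereferenced on versions where it does not exist.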

Testing

  • Tested with PyTorch 2.2.2 (previously failing) ✓
  • Tested with PyTorch 2.5+ (xpu available) ✓
  • Application now runs successfully on both versions

Impact

  • Fixes compatibility with PyTorch 2.0-2.3
  • No breaking changes for existing users
  • Maintains Intel XPU support for newer PyTorch versions

Fix device selection logic for XPU availability
