
[Windows] Add hasattr checks for distributed to improve compatibility #171

Open
0xDELUXA wants to merge 1 commit into ROCm:tridao from 0xDELUXA:rocm-tridao

Conversation

0xDELUXA commented Jan 24, 2026

Motivation

Without these checks, importing flash_attn on ROCm for Windows raises an AttributeError, because some private torch.distributed functions are missing from that build.

For example, when adding a custom node to ComfyUI:

  File "E:\ComfyUI\venv\Lib\site-packages\flash_attn\utils\distributed.py", line 12, in <module>
    torch.distributed.all_gather_into_tensor = torch.distributed._all_gather_base
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'torch.distributed' has no attribute '_all_gather_base'

Cannot import E:\ComfyUI\custom_nodes\RES4LYF module for custom nodes: module 'torch.distributed' has no attribute '_all_gather_base'

This PR adds hasattr checks to prevent the error; the guards are not expected to cause any issues on Linux.
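
The guard pattern itself is small: create each backward-compatibility alias only when the legacy private symbol actually exists. Below is a minimal sketch of the guarded aliasing; the all_gather_into_tensor line is confirmed by the traceback, while the reduce_scatter_tensor counterpart is an assumption that mirrors it.

    # Sketch of the guarded aliasing in flash_attn/utils/distributed.py.
    # The unguarded version assigns torch.distributed._all_gather_base
    # unconditionally and crashes on builds (e.g. ROCm on Windows) where
    # that private symbol does not exist.
    import torch
    import torch.distributed

    if not hasattr(torch.distributed, "all_gather_into_tensor") and hasattr(
        torch.distributed, "_all_gather_base"
    ):
        torch.distributed.all_gather_into_tensor = torch.distributed._all_gather_base

    # Assumed counterpart: the same guard applied to the reduce-scatter alias.
    if not hasattr(torch.distributed, "reduce_scatter_tensor") and hasattr(
        torch.distributed, "_reduce_scatter_base"
    ):
        torch.distributed.reduce_scatter_tensor = torch.distributed._reduce_scatter_base

With the guards in place, builds that already provide the public names skip the aliasing entirely, and builds missing the private names no longer fail at import time.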

Technical Details

Improves ROCm compatibility with FlashAttention-2 on Windows.

Specifications:
GPU: AMD Radeon RX 9060 XT
Python: 3.12.10
PyTorch: 2.11.0a0+rocm7.11.0a20260121 (TheRock)
Triton: 3.6.0a0.post25 (triton-windows)
OS: Windows 11

Test Plan

Functional testing on AMD Radeon RX 9060 XT (gfx1200).

Test Result

All existing Triton Flash Attention tests pass on gfx1200.

The AttributeError has been resolved.

