
Conversation

@zhangtao0408 (Owner)

Replace funcol.all_to_all_single with torch.distributed.all_to_all_single for tensor communication.

What does this PR do?

Fixes # (issue)

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
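
For context, here is a minimal sketch of the substitution this PR describes, assuming an initialized process group and an input that splits evenly across ranks. The helper name `all_to_all_exchange` and the even-split assumption are illustrative, not part of the PR:

```python
import torch
import torch.distributed as dist

def all_to_all_exchange(x: torch.Tensor, group=None) -> torch.Tensor:
    """Exchange equal-sized chunks of `x` across ranks.

    The functional collective returns a new tensor, e.g.
        out = funcol.all_to_all_single(x, None, None, group)
    whereas torch.distributed.all_to_all_single writes into a
    preallocated output tensor of the same shape.
    """
    output = torch.empty_like(x)
    dist.all_to_all_single(output, x, group=group)
    return output
```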

Replaced funcol.all_to_all_single with torch.distributed.all_to_all_single for tensor communication.
@zhangtao0408 marked this pull request as draft on December 2, 2025 at 03:57.
Refactor distributed tensor operations to handle async operations and wait for completion.
Add handler for async all_to_all_single operation.
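
The two follow-up commits concern the asynchronous form: unlike the functional collective, `torch.distributed.all_to_all_single` with `async_op=True` returns a `Work` handle that must be waited on before the output is read. A hedged sketch of what such a handler could look like (the wrapper name is hypothetical, not taken from the PR):

```python
import torch
import torch.distributed as dist

def async_all_to_all(x: torch.Tensor, group=None):
    # Launch the collective without blocking; with async_op=True a
    # Work handle is returned alongside the preallocated output.
    output = torch.empty_like(x)
    work = dist.all_to_all_single(output, x, group=group, async_op=True)
    return output, work

# Usage: overlap communication with independent computation, then
# wait on the handle before reading `output`.
# output, work = async_all_to_all(x)
# ... independent computation ...
# work.wait()
```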
