
fix: skip FSDP DTensor boundary validation under fake process group #3686

Open
Victarry wants to merge 1 commit into NVIDIA:main from Victarry:denliu/fix-fsdp-fake-pg-compat-main

Conversation

@Victarry (Contributor) commented Mar 4, 2026

Mirror of #3668
The validate_uneven_dtensor function uses all_reduce(MAX) across all ranks to verify that local shards collectively cover the full global tensor. Under fake process group (backend='fake'), all collective operations are no-ops, so only rank 0's boundaries are visible — the end-boundary check always fails.

Skip the boundary validation when the distributed backend is 'fake', since fake process group is only used for memory profiling where numerical correctness is irrelevant.
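
A minimal sketch of the guard this describes, for illustration; `validate_uneven_dtensor` is named in this PR, but the signature, the boundary representation, and the error handling here are assumptions rather than the actual Megatron-FSDP code:

```python
import torch
import torch.distributed as dist

def validate_uneven_dtensor(local_end_boundary: torch.Tensor, global_numel: int) -> None:
    # Under a fake process group (backend='fake'), collectives are no-ops,
    # so the MAX-reduction below would only ever see this rank's own
    # boundary and the coverage check would spuriously fail. Fake process
    # groups are only used for memory profiling, so skipping is safe.
    if dist.get_backend() == "fake":
        return

    # MAX-reduce each rank's end boundary; if the local shards collectively
    # cover the full global tensor, the maximum must reach global_numel.
    end = local_end_boundary.clone()
    dist.all_reduce(end, op=dist.ReduceOp.MAX)
    if int(end.item()) != global_numel:
        raise ValueError("Local DTensor shards do not cover the global tensor.")
```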

What does this PR do ?

⚠️ For major changes (either in lines of code or in their impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact the @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message or comment the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

The `validate_uneven_dtensor` function uses `all_reduce(MAX)` across
all ranks to verify that local shards collectively cover the full
global tensor. Under fake process group (backend='fake'), all
collective operations are no-ops, so only rank 0's boundaries are
visible — the end-boundary check always fails.

Skip the boundary validation when the distributed backend is 'fake',
since fake process group is only used for memory profiling where
numerical correctness is irrelevant.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@Victarry Victarry requested review from a team as code owners March 4, 2026 04:09
@copy-pr-bot (bot) commented Mar 4, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 4, 2026 04:09
@cspades (Member) left a comment

LGTM, is this the only modification we need to make to Megatron-FSDP to support a mock backend across all cases? Now that I think about it, there aren't many places where we validate the direct output of a collective...

@Phlip79 Phlip79 added the Final Review PR is in the "final review" stage label Mar 4, 2026
@Victarry (Contributor, Author) commented Mar 5, 2026

LGTM, is this the only modification we need to make to Megatron-FSDP to support a mock backend across all cases? Now that I think about it, there aren't many places where we validate the direct output of a collective...

Yeah. I verified that FSDP with a fake process group works with this fix.
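
For reference, a minimal sketch of standing up a fake process group for this kind of memory-profiling run; the exact setup used for the verification above isn't shown in this PR, `FakeStore` lives in a private torch.testing module whose path may change between releases, and the world size is an arbitrary example:

```python
import torch.distributed as dist
# FakeStore is a test-only helper; importing it also registers the
# 'fake' backend with torch.distributed.
from torch.testing._internal.distributed.fake_pg import FakeStore

dist.init_process_group(backend="fake", rank=0, world_size=8, store=FakeStore())
assert dist.get_backend() == "fake"
# ... construct the Megatron-FSDP-wrapped model and run a profiling step ...
dist.destroy_process_group()
```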

@Victarry (Contributor, Author) commented Mar 5, 2026

/ok to test 0f85443


Labels

Final Review PR is in the "final review" stage


4 participants