Save replicated tensors per-TP-rank and load with file locality#3392

Draft
sbak5 wants to merge 1 commit into NVIDIA:main from sbak5:sbak/aligned_load

Conversation


@sbak5 sbak5 commented Feb 12, 2026

Save path:

  • Assign each TP rank a unique checkpoint key for replicated tensors (.__tp_replica_{tp_rank}) so each rank writes to its own file.
  • Set replica_id to (0, 0, dp_replica_id) for these tensors (a minimal sketch follows this list).
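
A minimal sketch of the save-path re-keying in plain Python, assuming the postfix format above. `ShardedTensorStub` and `rekey_replicated_tensor` are illustrative stand-ins, not the PR's actual Megatron-Core code; only the postfix format and the replica_id value come from the description.

```python
# Illustrative stand-in for Megatron's ShardedTensor; only `key` and
# `replica_id` matter for this sketch.
from dataclasses import dataclass
from typing import Tuple

TP_REPLICA_POSTFIX = ".__tp_replica_"  # postfix format from the PR description

@dataclass
class ShardedTensorStub:
    key: str                          # checkpoint key (FQN)
    replica_id: Tuple[int, int, int]

def rekey_replicated_tensor(
    t: ShardedTensorStub, tp_rank: int, dp_replica_id: int
) -> ShardedTensorStub:
    """Give each TP rank a unique key so each rank writes its own file."""
    t.key = f"{t.key}{TP_REPLICA_POSTFIX}{tp_rank}"
    # (0, 0, dp_replica_id): the tensor is no longer marked as a TP replica,
    # only as a data-parallel replica.
    t.replica_id = (0, 0, dp_replica_id)
    return t

# Example: TP rank 2 saving a norm weight that is replicated across TP.
t = ShardedTensorStub(key="decoder.layers.0.norm.weight", replica_id=(0, 2, 0))
rekey_replicated_tensor(t, tp_rank=2, dp_replica_id=0)
print(t.key)  # decoder.layers.0.norm.weight.__tp_replica_2
```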

Load path:

  • Build an FQN→source_rank map from the checkpoint metadata (.distcp filenames).
  • In distribute_shards_to_ranks, group shards by source file and assign each group to the rank that wrote that file, so each rank reads from its own file when possible, reducing file opens on distributed filesystems.
  • Pass checkpoint_dir into apply_loading_parallelization to enable file-locality-aware distribution (sketched after this list).
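
A hedged sketch of the load-path grouping, assuming per-rank files follow the `__{rank}_{suffix}.distcp` naming used by torch.distributed.checkpoint. The metadata is simplified to a plain FQN→filename dict (real DCP metadata keys shards by MetadataIndex), and the helper names are hypothetical:

```python
# Sketch of file-locality-aware shard distribution on load.
import re
from collections import defaultdict
from typing import Dict, List

# Per-rank checkpoint files are assumed to be named "__{rank}_{suffix}.distcp".
DISTCP_RE = re.compile(r"__(\d+)_\d+\.distcp$")

def build_fqn_to_source_rank(storage_data: Dict[str, str]) -> Dict[str, int]:
    """Map each FQN to the rank whose .distcp file holds its data."""
    fqn_to_rank = {}
    for fqn, relative_path in storage_data.items():
        match = DISTCP_RE.search(relative_path)
        if match:
            fqn_to_rank[fqn] = int(match.group(1))
    return fqn_to_rank

def distribute_shards_to_ranks(
    shard_fqns: List[str], fqn_to_rank: Dict[str, int], world_size: int
) -> Dict[int, List[str]]:
    """Assign each shard to the rank that wrote its file; shards without a
    locality hint fall back to a deterministic round-robin."""
    per_rank: Dict[int, List[str]] = defaultdict(list)
    for i, fqn in enumerate(sorted(shard_fqns)):
        rank = fqn_to_rank.get(fqn, i % world_size)
        per_rank[rank % world_size].append(fqn)
    return dict(per_rank)

# Example: two replicated copies saved by TP ranks 0 and 1.
storage = {
    "norm.weight.__tp_replica_0": "__0_0.distcp",
    "norm.weight.__tp_replica_1": "__1_0.distcp",
}
ranks = build_fqn_to_source_rank(storage)
print(distribute_shards_to_ranks(list(storage), ranks, world_size=2))
# {0: ['norm.weight.__tp_replica_0'], 1: ['norm.weight.__tp_replica_1']}
```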

Backward compatibility:

  • Detect checkpoints saved without TP-replica postfix keys and strip the postfix from load-side ShardedTensor keys so they match the metadata (see the sketch below).
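
A sketch of the backward-compatibility fallback, assuming the same postfix format as above; both helper names are illustrative, not the PR's actual functions:

```python
# If the checkpoint metadata has no TP-replica keys (saved before this PR),
# strip the postfix from load-side keys so they match the old metadata.
import re
from typing import Iterable

TP_REPLICA_RE = re.compile(r"\.__tp_replica_\d+$")  # assumed postfix format

def checkpoint_has_tp_replica_keys(metadata_keys: Iterable[str]) -> bool:
    return any(TP_REPLICA_RE.search(k) for k in metadata_keys)

def strip_tp_replica_postfix(key: str) -> str:
    return TP_REPLICA_RE.sub("", key)

old_metadata_keys = ["decoder.layers.0.norm.weight"]  # pre-PR checkpoint
load_key = "decoder.layers.0.norm.weight.__tp_replica_3"
if not checkpoint_has_tp_replica_keys(old_metadata_keys):
    load_key = strip_tp_replica_postfix(load_key)
print(load_key)  # decoder.layers.0.norm.weight
```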

What does this PR do?

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message @mcore-oncall or tag them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review may be declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to go into core_r* release branches, select Cherry-pick after it has been merged to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of the core-adlr or core-nemo teams will be able to merge your PR.

Co-authored-by: Antoni-Joan Solergibert <asolergibert@nvidia.com>
@sbak5 sbak5 requested a review from asolergi-nv February 12, 2026 21:33
@sbak5 sbak5 requested review from a team as code owners February 12, 2026 21:33

copy-pr-bot bot commented Feb 12, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ko3n1g ko3n1g requested a review from a team February 12, 2026 21:33
@Phlip79 Phlip79 added the Expert Review and complexity: medium labels Feb 18, 2026
@ericharper ericharper removed the Expert Review label Feb 20, 2026

Phlip79 commented Mar 4, 2026

We are changing our review process and marking all open, unlabeled PRs as draft. This change will go into effect once #3659 is merged.

Moving forward, all PRs will be required to start as draft PRs. If you wish to get your PR merged, mark your PR as “Ready for review”. Read more about the new process at submit.md.

@Phlip79 Phlip79 marked this pull request as draft March 4, 2026 23:37