Fix memory issue in mxfp8 model init #3461

WanZzzzzz wants to merge 12 commits into NVIDIA:main from WanZzzzzz:mxfp8-ag

Conversation

@WanZzzzzz

@WanZzzzzz WanZzzzzz commented Feb 17, 2026

What does this PR do ?

In the current fp8_recipe=mxfp8, fp8_param_gather=true, reuse_grad_buf_for_mxfp8_param_ag=true workflow, there are two buffer allocations for the bf16 weights (layernorm, etc.): one for the module parameters and one made by the distributed optimizer. There are also unnecessary copies of the bf16 weights.
This PR addresses the above issue for the bf16 weights in an mxfp8 model.
The change passes the tests in test_mxfp8.py.
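For illustration, a minimal sketch of the double-allocation pattern and the fix, under simplified, hypothetical names (these functions are not Megatron-LM's actual API):

import torch

def init_bf16_params_before(params, buffer):
    # BEFORE (simplified): each bf16 param keeps its own storage AND is
    # copied into a slice of the distributed-optimizer buffer, so every
    # weight is allocated twice and copied once more than necessary.
    offset = 0
    for p in params:
        n = p.data.numel()
        buffer[offset : offset + n].copy_(p.data.view(-1))
        offset += n

def init_bf16_params_after(params, buffer):
    # AFTER (simplified): seed the buffer once, then point param.data at its
    # slice so the parameter and the buffer share a single allocation.
    offset = 0
    for p in params:
        n = p.data.numel()
        shard = buffer[offset : offset + n]
        shard.copy_(p.data.view(-1))
        p.data = shard.view_as(p.data)
        offset += n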

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message or comment the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@WanZzzzzz WanZzzzzz requested review from a team as code owners February 17, 2026 20:07
@copy-pr-bot

copy-pr-bot bot commented Feb 17, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ko3n1g ko3n1g requested a review from a team February 17, 2026 20:07
@Phlip79
Member

Phlip79 commented Feb 17, 2026

Please add further comments in the code explaining why the code breaks if is_bf16_weight_group is True.

@Phlip79
Member

Phlip79 commented Feb 17, 2026

/ok to test 824d127

@WanZzzzzz
Author

Please add further comments in the code explaining why the code breaks if is_bf16_weight_group is True.

Thanks. Done.

@chtruong814 chtruong814 added the needs-follow-up Issue needs follow-up label Feb 20, 2026
@WanZzzzzz
Author

@ko3n1g @chtruong814
I made changes according to the comments by @Phlip79. What else needs to be done?

@chtruong814 chtruong814 removed the needs-follow-up Issue needs follow-up label Feb 20, 2026
@ko3n1g
Contributor

ko3n1g commented Feb 20, 2026

/ok to test 13b72d8

@copy-pr-bot

copy-pr-bot bot commented Feb 20, 2026

/ok to test 13b72d8

@ko3n1g, there was an error processing your request: E2

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/2/

@ko3n1g
Contributor

ko3n1g commented Feb 20, 2026

/ok to test bff91de

# we only need to map bf16 weights (layernorm, embedding, etc.) to the buffer.
if self.ddp_config.reuse_grad_buf_for_mxfp8_param_ag:
    if not is_mxfp8tensor(param) and not is_float8tensor(param):
        if self.param_data is not None:
Contributor

Should we make this a small helper function? Seems identical to code in the else block.

Author

Avoided this, per @kunlunl's suggestion above.
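For reference, a rough sketch of the small helper the reviewer had in mind, with hypothetical names (map_param_to_buffer is illustrative only; the PR kept the logic inline per the suggestion above):

import torch

def map_param_to_buffer(param, param_data, offset):
    # Hypothetical helper: alias param.data to its slice of the shared
    # param buffer so both branches could share this logic.
    shard = param_data[offset : offset + param.data.numel()]
    shard.copy_(param.data.view(-1))        # seed the slice with current values
    param.data = shard.view_as(param.data)  # share storage, no second allocation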

Signed-off-by: qiyuw <qiyuw@nvidia.com>
Contributor

@kunlunl kunlunl left a comment
Approved. But there is a line with extra spaces which should be removed. (I removed it.)

# For the mxfp8_param with "reuse_grad_buf_for_mxfp8_param_ag=True",
# we need to copy the param_data from the shared_param/grad_buffer to param.data
# after the param all-gather.
Contributor

@kunlunl kunlunl Feb 26, 2026
Extra spaces.
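For context, a minimal sketch of the copy-after-all-gather step the comment above describes, assuming illustrative names (shared_buffer and offsets are not the PR's actual variables):

import torch

def copy_params_out_of_shared_buffer(params, shared_buffer, offsets):
    # With reuse_grad_buf_for_mxfp8_param_ag=True, the grad buffer stages the
    # all-gathered params; each param must copy its data out before backward
    # overwrites the buffer with gradients.
    for param, offset in zip(params, offsets):
        gathered = shared_buffer[offset : offset + param.data.numel()]
        param.data.copy_(gathered.view_as(param.data))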

@kunlunl
Contributor

kunlunl commented Feb 26, 2026

/ok to test 36548ac

@kunlunl
Contributor

kunlunl commented Feb 26, 2026

/ok to test f9e95ee

@deepakn94
Contributor

Can we add a unit test for this?
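A rough sketch of what such a test might look like; build_mxfp8_test_model is a hypothetical setup helper, not an existing Megatron-LM utility:

import torch

def test_bf16_params_alias_shared_buffer():
    # Hypothetical setup: a small model built with the PR's config flags.
    model, buffer = build_mxfp8_test_model(  # hypothetical helper
        fp8_param_gather=True,
        reuse_grad_buf_for_mxfp8_param_ag=True,
    )
    start = buffer.data_ptr()
    end = start + buffer.numel() * buffer.element_size()
    for name, p in model.named_parameters():
        if p.dtype == torch.bfloat16:
            # After this PR, bf16 weights should live inside the shared
            # buffer rather than owning a second allocation.
            assert start <= p.data.data_ptr() < end, name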

@deepakn94 deepakn94 removed the request for review from a team March 4, 2026 16:19
@deepakn94
Contributor

/ok to test 6656cb2

@deepakn94 deepakn94 enabled auto-merge March 4, 2026 16:19
Signed-off-by: qiyuw <qiyuw@nvidia.com>
auto-merge was automatically disabled March 4, 2026 16:55

Head branch was pushed to by a user without write access

@deepakn94
Contributor

/ok to test 77a0371

@deepakn94 deepakn94 added this pull request to the merge queue Mar 4, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/22694306199

@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/22695853542

@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/22701050986

@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/22705408197

