Fix memory issue in mxfp8 model init #3461
Conversation
Please add further comments in the code explaining why the code breaks if
Thanks. Done.
@ko3n1g @chtruong814
```python
# we only need to map bf16 weights (layernorm, embedding, etc.) to the buffer.
if self.ddp_config.reuse_grad_buf_for_mxfp8_param_ag:
    if not is_mxfp8tensor(param) and not is_float8tensor(param):
        if self.param_data is not None:
```
Should we make this a small helper function? Seems identical to code in the else block.
Avoided this, per @kunlunl's suggestion above.
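For illustration only, the condition in the diff above can be sketched as a standalone predicate. The names `needs_bf16_buffer_mapping` and the `dtype_tag` attribute are hypothetical stand-ins; `is_mxfp8tensor`/`is_float8tensor` mirror the real helper names but are simplified here rather than the actual Transformer Engine checks:

```python
def is_mxfp8tensor(param):
    # Stand-in predicate: the real helper inspects Transformer Engine tensor types.
    return getattr(param, "dtype_tag", None) == "mxfp8"

def is_float8tensor(param):
    # Stand-in predicate for plain fp8 tensors.
    return getattr(param, "dtype_tag", None) == "fp8"

def needs_bf16_buffer_mapping(param, reuse_grad_buf_for_mxfp8_param_ag):
    """Only plain bf16 params (layernorm, embedding, etc.) are mapped to the
    shared buffer when the grad buffer is reused for the mxfp8 param all-gather."""
    if not reuse_grad_buf_for_mxfp8_param_ag:
        return False
    return not is_mxfp8tensor(param) and not is_float8tensor(param)
```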
Signed-off-by: qiyuw <qiyuw@nvidia.com>
```python
# For the mxfp8_param with "reuse_grad_buf_for_mxfp8_param_ag=True",
# we need to copy the param_data from the shared_param/grad_buffer to param.data
# after the param all-gather.
```
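The behavior this comment describes can be sketched in plain Python. `Param`, `copy_params_from_shared_buffer`, and the `offsets` map are hypothetical illustrations; the real code copies slices between torch tensors inside the distributed optimizer:

```python
class Param:
    """Tiny stand-in for a parameter tensor: holds a flat list of values."""
    def __init__(self, numel):
        self.data = [0.0] * numel

def copy_params_from_shared_buffer(params, shared_buffer, offsets):
    # After the all-gather has filled `shared_buffer`, copy each param's
    # slice back into param.data, as the comment in the diff describes.
    # `offsets` maps id(param) -> (start, numel) in the flat buffer.
    for p in params:
        start, numel = offsets[id(p)]
        p.data[:] = shared_buffer[start:start + numel]
```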
Can we add a unit test for this?
Signed-off-by: qiyuw <qiyuw@nvidia.com>
🔄 Merge queue validation started! You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/22694306199
What does this PR do?
In the current fp8_recipe=mxfp8, fp8_param_gather=true, reuse_grad_buf_for_mxfp8_param_ag=true workflow, there are two buffer allocations for the bf16 weights (layernorm, etc.): one for the module parameters and another from the distributed-optimizer buffer allocation. There are also unnecessary copies of the bf16 weights.
This PR addresses the above issues for the bf16 weights in an mxfp8 model.
The change passed the test using test_mxfp8.py.
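The idea behind the fix can be illustrated with a small, library-free sketch: instead of allocating a second flat tensor per bf16 weight, each parameter gets a zero-copy view into the shared distributed-optimizer buffer, so both sides reference one allocation. The name `allocate_with_sharing` is hypothetical; the real implementation operates on torch tensors inside Megatron-LM's DDP buffer code:

```python
def allocate_with_sharing(param_sizes, buffer):
    """Hand each parameter a zero-copy view into one shared flat buffer,
    so the module parameter and the optimizer buffer share a single
    allocation. Hypothetical sketch, not the Megatron-LM implementation."""
    views, offset = [], 0
    mv = memoryview(buffer)  # slicing a memoryview shares storage, no copy
    for n in param_sizes:
        views.append(mv[offset:offset + n])
        offset += n
    return views
```

Writing through a view is immediately visible in the shared buffer, which is the property that removes the duplicate allocation and the extra copies.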
Contribution process
```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks (Core 0.8)

Code review
The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch
Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
(Step 1): Add PR label
Add the `Expert Review` label when your PR is ready for review.

(Step 2): Collect the expert reviewers' reviews
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review
Add the `Final Review` label.

(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into `core_r*` release branches, after this PR has been merged, select `Cherry-pick` to open a new PR into the release branch.

For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR
Any member of `core-adlr` and `core-nemo` will be able to merge your PR.