
Generalize emerging optimizer integration#3325

Closed
skyw wants to merge 5 commits into NVIDIA:dev from skyw:generalize_emerging_optimizer

Conversation

@skyw (Contributor) commented Feb 9, 2026

What does this PR do ?

The current integration is strongly Muon-focused, but more optimizers are gaining popularity in the community that we also want to support.
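
For illustration, here is a minimal sketch of the direction, assuming a simple name-to-factory registry; `register_emerging_optimizer`, the factory signatures, and the `get_emerging_optimizer` signature shown here are hypothetical, not Megatron-LM's actual API:

```python
# Hypothetical sketch (not Megatron-LM's actual API): a name-to-factory
# registry, so adding a new "emerging" optimizer is one registration
# instead of another Muon-specific branch.
from typing import Callable, Dict

import torch

_EMERGING_OPTIMIZERS: Dict[str, Callable[..., torch.optim.Optimizer]] = {}


def register_emerging_optimizer(name: str):
    """Decorator that registers an optimizer factory under `name`."""
    def _wrap(factory: Callable[..., torch.optim.Optimizer]):
        _EMERGING_OPTIMIZERS[name] = factory
        return factory
    return _wrap


@register_emerging_optimizer("adamw")
def _build_adamw(params, lr: float = 1e-4, weight_decay: float = 0.1):
    return torch.optim.AdamW(params, lr=lr, weight_decay=weight_decay)


def get_emerging_optimizer(name: str, params, **kwargs) -> torch.optim.Optimizer:
    """Look up a registered factory by name and build the optimizer."""
    try:
        factory = _EMERGING_OPTIMIZERS[name]
    except KeyError:
        raise ValueError(f"Unknown emerging optimizer: {name!r}")
    return factory(params, **kwargs)
```

Under this kind of scheme, a new optimizer plugs in with a single registration rather than a dedicated code path.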

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message or tag @mcore-oncall in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

skyw added 2 commits February 9, 2026 12:34
Signed-off-by: Hao Wu <skyw@nvidia.com>
Signed-off-by: Hao Wu <skyw@nvidia.com>
@skyw skyw requested a review from FDecaYed February 9, 2026 21:21
@copy-pr-bot (bot) commented Feb 9, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@FDecaYed (Contributor) left a comment


The get_emerging_optimizer generalization looks good to me; we only need to keep the tensor-parallel version separate (maybe generalize that too; we can figure out how as we add new optimizers).

On the other hand, I've never been on board with the config refactor. I think separating the config into AdamConfig/MuonConfig accomplishes basically nothing, and it doesn't give the user fine-grained control over which parameter (for example, a linear weight) uses which config.

Here is my proposal (a code sketch follows the list below):

  • we keep the optimizer config as it is, remove the subclasses, and keep the optimizer str field
  • this way we can eventually move the parameter-separation logic (e.g., linear weights) out of optimizer init, and it becomes just another override, similar to any wd/lr adjustment
  • with the above, the freeze/unfreeze hack will no longer be needed, and the get_emerging_optimizer logic can be simplified to calling _get_param_groups() a single time and creating a different optimizer for each group
  • eventually get_emerging_opt can re-merge with get_megatron_opt
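
For concreteness, a minimal sketch of that flat-config direction, assuming a dataclass config with a plain optimizer string and a per-parameter override hook; `OptimizerConfig`'s fields, `optimizer_override`, and `build_optimizers` below are illustrative assumptions, not the actual Megatron-LM code:

```python
# Illustrative sketch only: field names and helpers below are assumptions,
# not Megatron-LM's actual code.
from collections import defaultdict
from dataclasses import dataclass

import torch


@dataclass
class OptimizerConfig:
    optimizer: str = "adam"   # plain str field kept; no Adam/Muon subclasses
    lr: float = 1e-4
    weight_decay: float = 0.1


def optimizer_override(name: str, param: torch.nn.Parameter,
                       cfg: OptimizerConfig) -> str:
    # Parameter separation as "just another override", like a wd/lr
    # adjustment: route 2-D linear weights to muon, rest to the default.
    if param.ndim == 2 and "embedding" not in name:
        return "muon"
    return cfg.optimizer


def build_optimizers(model: torch.nn.Module, cfg: OptimizerConfig) -> list:
    # Split parameters into groups once (in the spirit of a single
    # _get_param_groups() call), then create one optimizer per group.
    groups = defaultdict(list)
    for name, param in model.named_parameters():
        groups[optimizer_override(name, param, cfg)].append(param)
    opts = []
    for opt_name, params in groups.items():
        # Only AdamW is shown; a real build would dispatch on opt_name
        # (e.g., to a Muon implementation for the "muon" group).
        opts.append(torch.optim.AdamW(params, lr=cfg.lr,
                                      weight_decay=cfg.weight_decay))
    return opts
```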

My draft change on top of this:
FDecaYed@4d9b0e5

That said, we can move forward with the current approach so we can add more optimizers soon, and do the above in a separate PR.

Signed-off-by: Hao Wu <skyw@nvidia.com>
@FDecaYed (Contributor) commented Feb 11, 2026

Discussed with @skyw; updated version of the proposed changes:
FDecaYed@71fdb01

@skyw (Contributor, Author) commented Feb 26, 2026

Closing in favor of #3618.

@skyw skyw closed this Feb 26, 2026
