Generalize emerging optimizer integration #3325
Conversation
Signed-off-by: Hao Wu <skyw@nvidia.com>
The `get_emerging_optimizer` generalization looks good to me, and we only need to keep the tensor-parallel version separate (maybe generalize it too; we can figure out how as we add new optimizers).
On the other hand, I've never been on board with the config refactor. I think separating the config into AdamConfig/MuonConfig accomplishes basically nothing. It also doesn't give the user fine-grained control over which parameters (for example, linear weights) use which config.
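To make the alternative concrete, here is a minimal sketch of the flat config shape I have in mind (field names are illustrative, not necessarily the actual Megatron-Core config):

```python
from dataclasses import dataclass

@dataclass
class OptimizerConfig:
    # One flat config instead of AdamConfig/MuonConfig subclasses;
    # which optimizer to use stays a plain string field.
    optimizer: str = 'adam'
    lr: float = 1e-3
    weight_decay: float = 0.01
    # Optimizer-specific knobs live side by side; each optimizer
    # reads only the fields it cares about.
    adam_beta1: float = 0.9
    adam_beta2: float = 0.999
```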
Here is my proposal:
- we keep the optimizer config as it is, remove the subclasses, and keep the optimizer str field
- this way we can eventually move the parameter-separation logic (e.g. linear weights) out of the optimizer init, and it becomes just another override, similar to any wd/lr adjustment
- with the above, the freeze/unfreeze hack is no longer needed, and the get_emerging_optimizer logic can be simplified to calling _get_param_groups() a single time and creating a different optimizer for each group (see the sketch after this list)
- eventually get_emerging_opt can re-merge with get_megatron_opt
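A minimal, self-contained sketch of the "different opt for different group" idea (not the linked draft; the `build_optimizers` helper, the per-group `optimizer` key, and the optimizer table are all hypothetical, with param groups passed in directly to stand in for a single `_get_param_groups()` call):

```python
from collections import defaultdict

import torch

def build_optimizers(param_groups, default_optimizer='adam'):
    # Bucket the param groups by the optimizer each group requests; the
    # 'optimizer' key is a hypothetical per-group override, set the same
    # way any wd/lr adjustment would be. Pop it so torch never sees it.
    buckets = defaultdict(list)
    for group in param_groups:
        buckets[group.pop('optimizer', default_optimizer)].append(group)

    # One optimizer instance per bucket; a Muon class would slot into
    # this table once available (AdamW stands in for it here).
    factories = {'adam': torch.optim.Adam, 'adamw': torch.optim.AdamW}
    return [factories[name](groups) for name, groups in buckets.items()]

# Usage: route linear weights to one optimizer and biases to another; no
# freeze/unfreeze hack, since every parameter sits in exactly one group.
model = torch.nn.Linear(4, 4)
groups = [
    {'params': [model.weight], 'lr': 1e-3, 'optimizer': 'adamw'},
    {'params': [model.bias], 'lr': 1e-3},
]
optimizers = build_optimizers(groups)
```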
My draft change on top of this: FDecaYed@4d9b0e5
That said, we can move forward with the current approach so we can add more optimizers soon, and do the above in a separate PR.
discussed with @skyw, updated version of proposed changes
closing in favor of #3618
What does this PR do?
The current integration is strongly Muon-focused, but more optimizers are gaining popularity in the community that we also want to support.
Contribution process
```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks

Code review
The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch
Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
(Step 1): Add PR label
Add the `Expert Review` label when your PR is ready for review.

(Step 2): Collect the expert reviewers' reviews
Final Review might get declined if these requirements are not fulfilled.
(Step 3): Final Review
Add the `Final Review` label.

(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into `core_r*` release branches, after this PR has been merged, select `Cherry-pick` to open a new PR into the release branch.

For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of `core-adlr` and `core-nemo` will be able to merge your PR.