Override moe_token_dispatcher_type to "alltoall" when export megatron…#2658

Open
jaeminh wants to merge 1 commit into NVIDIA-NeMo:main from jaeminh:main

Conversation

@jaeminh

@jaeminh jaeminh commented Mar 5, 2026

What does this PR do ?

When exporting a Megatron checkpoint to HF format, an error AssertionError: Flex token dispatcher requires TPxEP > 1 occurs if moe_token_dispatcher_type is set to flex.

This PR overrides moe_token_dispatcher_type during the export process and temporarily switches flex to alltoall only for export. This avoids the assertion without affecting the original configuration.
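The override approach described above can be sketched as follows. This is a hypothetical illustration, not the actual code in `auto_bridge.py`; the function name `dispatcher_overrides` and the dict-based config are stand-ins for the real model-parallel override mechanism:

```python
# Hypothetical sketch of the export-time override described in this PR.
# The real logic lives in src/megatron/bridge/models/conversion/auto_bridge.py
# and its exact interface may differ.

def dispatcher_overrides(config: dict) -> dict:
    """Return model-parallel overrides for export: switch the 'flex' MOE
    token dispatcher to 'alltoall', leaving the stored config untouched."""
    if config.get("moe_token_dispatcher_type") == "flex":
        return {"moe_token_dispatcher_type": "alltoall"}
    return {}

# The overrides apply only to the in-memory model used for export;
# the checkpoint on disk keeps its original dispatcher setting.
cfg = {"moe_token_dispatcher_type": "flex"}
overrides = dispatcher_overrides(cfg)
```

Because the override is computed rather than written back into `cfg`, the original configuration is preserved after export, which is the behavior the PR description claims.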

Changelog

  • Add specific line by line info of high level changes in this PR.

GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

If you haven't finished some of the above items, you can still open a "Draft" PR.

Additional Information

Summary by CodeRabbit

  • Refactor
    • Modified model checkpoint export and loading mechanisms to adjust the initialization sequence.

@copy-pr-bot

copy-pr-bot bot commented Mar 5, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai
Contributor

coderabbitai bot commented Mar 5, 2026

📝 Walkthrough

Walkthrough

The change adds an initial model load with MOE token dispatcher override settings in the export_ckpt function, immediately followed by a standard model load without overrides. The first load is overwritten by the second, making the initial load unused.

Changes

Cohort / File(s) Summary
Model Loading
src/megatron/bridge/models/conversion/auto_bridge.py
Added redundant consecutive loads of Megatron model: first with moe_token_dispatcher_type override, then immediately without overrides, overwriting the first load.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Suggested labels

r0.3.0

Suggested reviewers

  • yaoyu-33
  • malay-nagda
🚥 Pre-merge checks: ✅ 4 passed

  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately describes the main change: overriding moe_token_dispatcher_type to 'alltoall' during Megatron export. It is specific, concise, and directly reflects the primary purpose of the PR.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.
  • Test Results For Major Changes — ✅ Passed: Minor bug fix (+6/-0 lines) for an AssertionError during model export; targeted scope without major features, breaking changes, or performance implications.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/megatron/bridge/models/conversion/auto_bridge.py`:
- Around lines 838-844: The second call to load_megatron_model immediately overwrites the model that was loaded with mp_overrides, making the {"moe_token_dispatcher_type": "alltoall"} override ineffective. Remove the redundant call that reassigns megatron_model (the load_megatron_model(..., wrap_with_ddp=False) on the line after the override) so that megatron_model retains the mp_overrides before it is passed to save_hf_pretrained and the dispatcher-type assertion succeeds.
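The bug flagged above can be reconstructed as a minimal sketch. The stub below stands in for the real load_megatron_model (which loads an actual Megatron model); here it only records which overrides were in effect, to show why the second call discards the first:

```python
# Illustrative reconstruction of the flagged bug; load_megatron_model is a
# hypothetical stand-in that records the overrides it was called with.

def load_megatron_model(path, mp_overrides=None, wrap_with_ddp=False):
    return {"overrides": mp_overrides or {}}

# Buggy sequence: the second call reassigns megatron_model, so the
# dispatcher override from the first call never reaches the export path.
megatron_model = load_megatron_model(
    "ckpt", mp_overrides={"moe_token_dispatcher_type": "alltoall"}
)
megatron_model = load_megatron_model("ckpt", wrap_with_ddp=False)  # overwrites
assert megatron_model["overrides"] == {}  # override lost

# Fix: a single load that carries both the override and wrap_with_ddp=False.
megatron_model = load_megatron_model(
    "ckpt",
    mp_overrides={"moe_token_dispatcher_type": "alltoall"},
    wrap_with_ddp=False,
)
```

With the redundant reassignment removed, the model passed on to the HF export retains the "alltoall" dispatcher setting.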

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 85ff4248-ad81-4ba7-a887-9eeee6b942ac

📥 Commits

Reviewing files that changed from the base of the PR and between 15d758f and a2e8207.

📒 Files selected for processing (1)
  • src/megatron/bridge/models/conversion/auto_bridge.py

