
[Dev][feat] Support CUDA Graph capture offloading modules #3219

Open
lhb8125 wants to merge 101 commits into NVIDIA:dev from lhb8125:hongbinl/activation_offloading_refactor_cuda_graph

Conversation

lhb8125 (Contributor) commented Feb 3, 2026

What does this PR do?

PR to main branch

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

lhb8125 and others added 30 commits October 29, 2025 02:46
2. refine offloading docs;
3. remove TransformerLayer.cuda_graph_stream and cuda_graph_event

Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
lhb8125 force-pushed the hongbinl/activation_offloading_refactor_cuda_graph branch from d34be1d to 998d1b0 on March 2, 2026 03:50
lhb8125 (Contributor, Author) commented Mar 2, 2026

/ok to test 4bf0085

lhb8125 added 2 commits March 1, 2026 20:30
lhb8125 (Contributor, Author) commented Mar 2, 2026

/ok to test cd84623

lhb8125 (Contributor, Author) commented Mar 2, 2026

/ok to test 61b589a

lhb8125 (Contributor, Author) commented Mar 2, 2026

/ok to test 0200121

Fine-grained offloading is compatible with CUDA graphs. When CUDA graph is enabled, the following constraints apply:

- `attn_norm` and `mlp_norm` **cannot** be offloaded (they cross CUDA graph boundaries).
- `cuda_graph_scope` must include `attn` and `moe_router`.
A reviewer (Contributor) commented:
Can I use the "moe" scope if I'm in a drop-pad MoE?

Can I offload the attention-part modules if my CUDA graph scope is only "moe_router"? This may be needed since some cases have dynamic-shaped attention, so only the router part can be captured.

lhb8125 (Contributor, Author) replied Mar 5, 2026:

I removed this hard limitation; now the scope can be `moe_router` alone or `moe`.
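
For illustration, a minimal sketch of how these settings might be combined once the hard limitation is removed. The option names echo terms used in this thread (`cuda_graph_scope`, `activation_offload_fraction`); the exact config surface shown here is an assumption, not the actual Megatron-LM interface.

```python
# Hypothetical configuration sketch (NOT the real Megatron-LM argument surface).
# After the relaxation, cuda_graph_scope may be "moe_router" alone or "moe";
# attn_norm / mlp_norm remain non-offloadable whenever they sit on a graph boundary.
config = dict(
    enable_cuda_graph=True,
    cuda_graph_scope=["moe_router"],        # could also be ["moe"] or ["attn", "moe_router"]
    activation_offload_fraction=0.5,        # offload half of the eligible groups
    offload_modules=["attn", "moe"],        # illustrative name; norms intentionally excluded
)
```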


Fine-grained offloading is compatible with CUDA graphs. When CUDA graph is enabled, the following constraints apply:

- `attn_norm` and `mlp_norm` **cannot** be offloaded (they cross CUDA graph boundaries).
A reviewer (Contributor) commented:

Unless using the "moe" cudagraph scope in a drop-pad or sync-free MoE.

lhb8125 (Contributor, Author) replied:

what if we only capture moe_router or moe_preprocess? Is it still true?

The reviewer (Contributor) replied:

I think so. If we only capture moe_router, mlp_norm acts as the input buffer of the graph, so it is not offloadable. The only exception is when we use the attn+moe scope for a drop-pad MoE; then mlp_norm is entirely inside the graph, so it is offloadable.
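
A rough sketch of the rule being described here, assuming a module is offloadable only when it lies strictly inside the captured region rather than serving as a graph input/output buffer; the scope and module names come from this thread, but the helper itself is purely illustrative.

```python
# Illustrative only: which modules sit inside the captured graph for a given
# cuda_graph_scope, per the discussion above. A tensor that feeds the graph as
# an input buffer (e.g. the mlp_norm output when only moe_router is captured)
# cannot be offloaded.
MODULES_INSIDE_GRAPH = {
    "moe_router": {"moe_router"},              # mlp_norm is the graph input -> not offloadable
    "attn+moe": {"attn", "mlp_norm", "moe"},   # drop-pad MoE: mlp_norm fully captured -> offloadable
}

def is_offloadable(module: str, scope: str) -> bool:
    return module in MODULES_INSIDE_GRAPH.get(scope, set())
```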

The reviewer (Contributor) added:

Btw, you cannot capture only moe_preprocess; moe_preprocess must go together with moe_router.

lhb8125 changed the title from "Support CUDA Graph capture offloading modules" to "[Dev][feat] Support CUDA Graph capture offloading modules" on Mar 4, 2026
hidden_states: Tensor,
inference_context: BaseInferenceContext | None = None,
padding_mask: Tensor | None = None,
flush_delayed_groups: bool = True,
A reviewer (Contributor) commented:

Since flush_delayed_groups is cudagraph-specific, can it be moved to cudagraph-specific code? If this just needs to run after warmup, can it be passed as TE's make_graphed_callable(post_warmup_hook=)?

lhb8125 (Contributor, Author) replied:

Thanks for the comments. I removed the function call in forward() and kept only the call in _te_cuda_graph_replay. Now, in the warmup iterations we launch the offloading immediately, and in the replay iterations we delay the offloading and flush it after graph replay.

The "warmup" here is a little ambiguous:

  1. The first several training iterations are warmup iterations, after which we start graph capturing.
  2. Before capturing the CUDA graph, TE runs several fprop & bprop steps to "warm up".

In the previous code, flush_delayed_groups was executed at the end of forward() in the warmup iterations (the first case).
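
A minimal sketch of the warmup-versus-replay behaviour described above; the helper names (`delay_offloads`, the replay flag) are illustrative, and only `flush_delayed_groups` is taken from the PR itself.

```python
# Illustrative control flow only (names assumed, not the real Megatron/TE API):
# warmup iterations launch offloads eagerly inside forward(); replay iterations
# defer them and flush right after the CUDA graph replay.
def run_transformer_layer(layer, hidden_states, offload_manager, is_replay: bool):
    if is_replay:
        offload_manager.delay_offloads()        # queue groups instead of launching D2H copies
        out = layer.cuda_graph_replay(hidden_states)
        offload_manager.flush_delayed_groups()  # launch the queued offloads after replay
    else:
        out = layer(hidden_states)              # warmup path: offloads fire immediately
    return out
```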

lhb8125 and others added 4 commits March 5, 2026 01:05
2. remove flush_delayed_groups() when the training is not in replay mode

lhb8125 (Contributor, Author) commented Mar 5, 2026

/ok to test b481fa9

lhb8125 (Contributor, Author) commented Mar 5, 2026

/ok to test ce84682

3. **Apply fraction**: Only a fraction of eligible groups are actually offloaded (controlled by `activation_offload_fraction`).
4. **Print summary table**: An ASCII table of per-rank offload bytes is printed for debugging.

### CPU Tensor Pool
A reviewer (Contributor) commented:

GPU Tensor Pool?
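
For reference, a rough sketch of what the "apply fraction" step could look like; `activation_offload_fraction` comes from the excerpt above, while the group selection helper is purely illustrative.

```python
# Illustrative fraction-based selection (not the actual implementation):
# offload the first k of n eligible groups, where k = round(n * fraction).
def select_groups_to_offload(eligible_groups, activation_offload_fraction):
    k = round(len(eligible_groups) * activation_offload_fraction)
    return eligible_groups[:k]
```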


### Warmup and Adaptive Offloading

The first training iteration serves as a **warmup phase** where the manager records tensor groups, their sizes, and the execution order. After warmup, a `post_warmup_callback` runs to:
A reviewer (Contributor) commented:

So we cannot capture cudagraphs on the first training iteration? If so, we should assert cuda_graph_warmup_steps>0 when offloading is enabled.
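
A sketch of the guard this comment suggests; the config field names used below (`enable_cuda_graph`, `cuda_graph_warmup_steps`, `activation_offload_fraction`) are assumptions for illustration only.

```python
# Hypothetical validation along the lines suggested above (field names assumed):
def validate_offload_config(cfg):
    if cfg.enable_cuda_graph and cfg.activation_offload_fraction > 0:
        # the first training iteration serves as the offload warmup phase,
        # so CUDA graph capture must not start on iteration 0
        assert cfg.cuda_graph_warmup_steps > 0, (
            "activation offloading needs a warmup iteration before CUDA graph capture"
        )
```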


Labels

  • complexity: medium
  • core_dev_r0.16.0 (Cherry-pick label for the core_dev_r0.16.0 release branch)
  • dev branch (Dev branch related issues and development)
  • Expert Review (Apply this label to indicate that your PR is ready for expert review.)
