## Description
This PR adds Sequence Packing (THD format) E2E support to MCore. Dev branch PR: #2924.

It supplies the core THD functionality that was previously missing from MCore; the key pieces are summarized below.
## Key Changes
### 1. Add a `data_iterator` wrapper (`megatron/core/datasets/data_schedule.py::wrap_dataloader`)
A wrapper function that intercepts the data iterator to perform scheduling and packing (a sketch of the resulting metadata follows the list):
- Builds `cu_seqlens` metadata.
- Returns `num_microbatches` along with two parameters for FLOPs calculation: `num_total_tokens_this_global_batch` and `sequence_square_sum_this_global_batch`.
- Broadcasts `num_microbatches` and the FLOPs parameters across TP ranks, since only TP rank 0 has access to the data iterator.
- Requires the packed-sequence parameters (`cu_seqlens`, `cu_seqlens_padded`, `max_seqlen`, etc.) to be broadcast from PP rank 0 for correct computation.
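As a rough illustration of this metadata, here is a minimal sketch, not the PR's actual implementation: `build_packed_seq_metadata`, `flops_params`, and the `pad_to` rounding rule are hypothetical names and assumptions, showing only how the THD boundary tensors and the two FLOPs scalars can be derived from per-sequence lengths.

```python
import torch

def build_packed_seq_metadata(seqlens, pad_to=64):
    """Derive THD boundary tensors for one packed microbatch (sketch)."""
    # Cumulative boundaries of the raw sequences: [0, l0, l0+l1, ...]
    cu_seqlens = torch.cumsum(
        torch.tensor([0] + list(seqlens)), dim=0, dtype=torch.int32)
    # Boundaries after rounding every sequence up to a multiple of pad_to
    padded = [((l + pad_to - 1) // pad_to) * pad_to for l in seqlens]
    cu_seqlens_padded = torch.cumsum(
        torch.tensor([0] + padded), dim=0, dtype=torch.int32)
    return cu_seqlens, cu_seqlens_padded, max(seqlens)

def flops_params(global_batch_seqlens):
    """The two scalars used for dynamic FLOPs reporting (sketch)."""
    num_total_tokens_this_global_batch = sum(global_batch_seqlens)
    sequence_square_sum_this_global_batch = sum(l * l for l in global_batch_seqlens)
    return num_total_tokens_this_global_batch, sequence_square_sum_this_global_batch
```

For example, lengths `[3, 5]` with `pad_to=4` yield `cu_seqlens = [0, 3, 8]` and `cu_seqlens_padded = [0, 4, 12]`.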
### 2. Mock SFT Dataset Support

Supports mock datasets for testing and benchmarking with configurable sequence-length distributions.
There are two modes for the mock SFT dataset:
{"mode": "file", "path": "/path/to/seqlens.csv"}{"mode": "distribution", "type": "lognormal", "min_seq_len": 1024, "max_seq_len": 8192, "mean_seq_len": 4096, "lognormal_sigma": 1.1}Architecture
## Architecture

### Before vs After
```mermaid
graph LR
    subgraph Before
        A1[DataIterator] --> B1[get_batch]
        B1 --> C1[forward_backward]
        C1 --> D1[Fixed seq_len FLOPs]
    end
    subgraph After
        A2[DataIterator] --> W[wrap_dataloader]
        W -->|schedule + pack| B2[PackedDataIterator]
        W -->|broadcast| M[num_microbatches + flops_params]
        B2 --> C2[get_batch_for_sequence_packing]
        C2 --> D2[forward_backward]
        D2 --> E2[Dynamic FLOPs]
        M --> E2
    end
```
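The "Fixed seq_len FLOPs" vs "Dynamic FLOPs" distinction comes from attention cost: with packing, the quadratic term depends on the actual per-sequence lengths, which is what `sequence_square_sum_this_global_batch` captures. A sketch using a common transformer cost model; the coefficients are the usual 6 × params × tokens approximation, assumed here rather than taken from this PR.

```python
def packed_transformer_flops(num_layers, hidden_size, total_tokens, seq_sq_sum):
    """Approximate fwd+bwd FLOPs for one packed global batch (sketch)."""
    # Dense work (QKV/output projections + MLP, ~12*h^2 weights per layer)
    # scales linearly with the total token count; the factor 6 covers
    # forward + backward multiply-accumulates.
    dense = 6 * 12 * num_layers * hidden_size ** 2 * total_tokens
    # Attention-score work (QK^T and scores @ V) scales with the sum of
    # squared per-sequence lengths, which packing makes data-dependent.
    attention = 12 * num_layers * hidden_size * seq_sq_sum
    return dense + attention
```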
### Execution Flow

```mermaid
sequenceDiagram
    participant Train as training.py
    participant Schedule as schedules.py
    participant Wrap as wrap_iterator_helper
    participant DataSched as data_schedule.py
    participant GetBatch as get_batch_for_seq_packing
    Train->>Schedule: forward_backward_*(data_iterator)
    Schedule->>Wrap: wrap_iterator_helper(config, data_iterator)
    Wrap->>DataSched: wrap_dataloader(data_iterator, scheduler_type)
    Note over DataSched: 1. Gather global seqlens across DP
    Note over DataSched: 2. Scheduler assigns sequences to microbatches
    Note over DataSched: 3. All-to-all redistribute samples
    Note over DataSched: 4. Pack into microbatches
    Note over DataSched: 5. Broadcast to TP/PP ranks
    DataSched-->>Schedule: (packed_iter, num_mbs, total_tokens, seq_sq_sum)
    loop for each microbatch
        Schedule->>GetBatch: get_batch_on_this_rank_for_sequence_packing
        Note over GetBatch: Broadcast tokens/labels to TP group
        Note over GetBatch: Partition for CP if needed
        GetBatch-->>Schedule: (tokens, labels, loss_mask, pos_ids, packed_seq_params)
    end
```
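Step 2 of the flow ("Scheduler assigns sequences to microbatches") can be pictured with a naive first-fit-decreasing bin packer. This is only an illustration; the PR's `default` and `empty` schedulers may use different policies.

```python
def schedule_into_microbatches(seqlens, max_tokens_per_microbatch):
    """First-fit-decreasing: place each sequence in the first microbatch with room."""
    order = sorted(range(len(seqlens)), key=lambda i: -seqlens[i])
    microbatches, loads = [], []
    for i in order:
        for mb, load in enumerate(loads):
            if load + seqlens[i] <= max_tokens_per_microbatch:
                microbatches[mb].append(i)
                loads[mb] += seqlens[i]
                break
        else:
            # No existing microbatch fits (over-long sequences get a
            # microbatch of their own in this sketch).
            microbatches.append([i])
            loads.append(seqlens[i])
    return microbatches  # each entry: sample indices packed together
```

For example, `schedule_into_microbatches([8192, 1024, 4096, 2048], 8192)` returns `[[0], [2, 3, 1]]`: the 8192-token sequence fills one microbatch, and the rest pack into another.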
## New Arguments

- `--sequence-packing`
- `--sequence-packing-scheduler` (`default` or `empty`)
- `--sft-mock-dataset-config-json`
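For reference, a hedged sketch of how these flags might be registered with argparse; the real wiring lives in `megatron/training/arguments.py`, and the defaults and help strings shown here are assumptions.

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_argument_group('sequence packing')
group.add_argument('--sequence-packing', action='store_true',
                   help='Pack variable-length samples into THD-format microbatches.')
group.add_argument('--sequence-packing-scheduler', choices=['default', 'empty'],
                   default='default',
                   help='Strategy for assigning sequences to microbatches.')
group.add_argument('--sft-mock-dataset-config-json', type=str, default=None,
                   help='JSON config for the mock SFT dataset (file or distribution mode).')
```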
## Changes

- `megatron/core/datasets/data_schedule.py`
- `megatron/core/pipeline_parallel/schedules.py`
- `megatron/training/training.py`
- `megatron/training/datasets/sft_dataset.py`
- `megatron/training/arguments.py`
- `megatron/core/model_parallel_config.py`
- `tests/unit_tests/test_sequence_packing.py`
## Code review

The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.

### For MRs into `main` branch
Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
(Step 1): Add PR label `Expert Review`

(Step 2): Collect the expert reviewers' reviews

- Add the `Expert Review` label when your PR is ready for review.
- Final Review might get declined if these requirements are not fulfilled.
(Step 3): Final Review

- Add the `Final Review` label

(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into `core_r*` release branches, then after this PR has been merged, select `Cherry-pick` to open a new PR into the release branch.

### For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

### Merging your PR
Any member of `core-adlr` and `core-nemo` will be able to merge your PR.