[QWen3_VL] [DistTrain] Disaggregated training system #1632

@shifangx

Description

This issue tracks the implementation of [DistTrain], a disaggregated training system for multi-modality model training.
We plan to set up an end-to-end example for training QWen3-VL-235B-A22B-Instruct with MDP.

Note:
Both MDP and DistTrain allow the encoder and the LLM backbone to use different DP/TP/PP sizes.
The main difference between them is that MDP collocates the encoder and the LLM backbone on the same GPUs, while DistTrain places the encoder and the LLM backbone on separate GPUs.
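The GPU-placement distinction above can be illustrated with a minimal sketch. The `ParallelConfig` dataclass and `assign_ranks` helper below are hypothetical names invented for illustration (they are not part of the DistTrain or MDP codebase); the sketch only shows how rank assignment differs between the collocated (MDP-style) and disaggregated (DistTrain-style) placements.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ParallelConfig:
    """Hypothetical per-module parallelism shape (DP x TP x PP)."""
    dp: int
    tp: int
    pp: int

    @property
    def world_size(self) -> int:
        return self.dp * self.tp * self.pp


def assign_ranks(enc: ParallelConfig, llm: ParallelConfig, disaggregated: bool):
    """Sketch of GPU rank assignment for the two placement strategies.

    disaggregated=True  -> DistTrain-style: encoder and LLM backbone
                           occupy disjoint GPU ranges.
    disaggregated=False -> MDP-style: both modules share the same GPUs;
                           their DP/TP/PP shapes may differ, but they must
                           cover the same number of devices.
    """
    if disaggregated:
        enc_ranks = list(range(enc.world_size))
        llm_ranks = list(range(enc.world_size, enc.world_size + llm.world_size))
    else:
        assert enc.world_size == llm.world_size, "collocated modules must span the same GPUs"
        enc_ranks = list(range(enc.world_size))
        llm_ranks = list(range(llm.world_size))
    return enc_ranks, llm_ranks
```

For example, an encoder with DP=2/TP=1/PP=1 (2 GPUs) and an LLM backbone with DP=1/TP=2/PP=2 (4 GPUs) would occupy ranks 0-1 and 2-5 under DistTrain-style placement, for 6 GPUs total; under MDP-style placement both shapes would have to multiply out to the same device count.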

reference

dependency
