Decent-DP is a cutting-edge PyTorch extension designed to simplify and accelerate decentralized data-parallel training. As the official implementation of the ICLR'25 paper "From Promise to Practice: Realizing High-Performance Decentralized Training", Decent-DP empowers you to scale multi-worker training efficiently, eliminating centralized bottlenecks and streamlining your deep learning pipelines.
- Decentralized Architecture: Efficiently distributes training across multiple workers without relying on a central coordinator.
- Seamless PyTorch Integration: Easily plugs into your existing PyTorch codebase with minimal modifications.
- High Performance: Optimized for speed and scalability based on state-of-the-art research.
- Flexible and Extensible: Supports various algorithmic schemas to suit different training scenarios and model architectures.
- Python 3.11+
- PyTorch
Install directly from PyPI:
```bash
pip install decent-dp
```

Or, to install from source, clone the repository and install in editable mode:

```bash
git clone https://github.com/WangZesen/Decent-DP.git
cd Decent-DP
pip install -e .
```

Here is a complete example of how to use Decent-DP to train a model:

```python
import torch
import torch.nn as nn
import torch.distributed as dist
from decent_dp.ddp import DecentralizedDataParallel as DecentDP
from decent_dp.optim import optim_fn_adamw
from decent_dp.utils import initialize_dist
# Initialize distributed environment
rank, world_size = initialize_dist()
# Create your model
model = nn.Sequential(
    nn.Linear(10, 50),
    nn.ReLU(),
    nn.Linear(50, 1)
).cuda()
# Wrap model with DecentDP
model = DecentDP(
    model,
    optim_fn=optim_fn_adamw,  # or your custom optimizer function
    topology="complete"       # or "ring", "one-peer-exp", "alternating-exp-ring"
)
# Training loop
# (num_epochs, train_loader, and val_loader are assumed to be defined elsewhere)
for epoch in range(num_epochs):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.cuda(), target.cuda()
        output = model(data)
        loss = nn.functional.mse_loss(output, target)
        # Zero gradients, backward pass
        model.zero_grad()
        loss.backward()
        # Note: optimizer.step() is automatically called by DecentDP

    # Evaluation at the end of each epoch
    model.eval()
    with torch.no_grad():
        for data, target in val_loader:
            data, target = data.cuda(), target.cuda()
            output = model(data)
            val_loss = nn.functional.mse_loss(output, target)
```

Launch the script on multiple processes/nodes using `torchrun`:

```bash
torchrun --nproc_per_node=4 your_training_script.py
```

Unlike traditional centralized approaches where all workers communicate with a single parameter server, decentralized training allows workers to communicate directly with their neighbors. This eliminates bottlenecks and improves scalability.
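To make the contrast concrete, the snippet below is a minimal single-process simulation of one gossip-averaging step on a ring of four workers: each simulated worker mixes its parameter vector with its two neighbors only, and no central server is involved. It is purely illustrative and is not code from Decent-DP.

```python
import torch

# Single-process simulation of one gossip-averaging step on a ring of workers.
world_size = 4
params = [torch.randn(3) for _ in range(world_size)]  # one parameter vector per simulated worker

mixed = []
for i in range(world_size):
    left, right = (i - 1) % world_size, (i + 1) % world_size
    # Each worker averages only with its immediate ring neighbors.
    mixed.append((params[left] + params[i] + params[right]) / 3)

# The spread across workers shrinks after each mixing step, so repeated steps
# drive all workers toward the same average model without a central server.
print(torch.stack(params).std(dim=0))
print(torch.stack(mixed).std(dim=0))
```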
Decent-DP supports various communication patterns (the one-peer exponential schedule is sketched after this list):
- Complete: All workers communicate with each other in each iteration
- Ring: Workers form a ring and communicate with their immediate neighbors
- One-Peer Exponential: Workers communicate with peers at exponentially increasing distances
- Alternating Exponential-Ring: Alternates between exponential and ring communication patterns
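As a rough illustration of the one-peer exponential pattern, the sketch below computes which peer a worker would contact at each iteration under one common definition of that topology (the hop distance doubles every round and the world size is a power of two); the exact schedule used by Decent-DP may differ.

```python
import math

def one_peer_exp_peer(rank: int, t: int, world_size: int) -> int:
    """Peer contacted by `rank` at iteration `t` under a one-peer exponential
    schedule (one common definition; `world_size` assumed to be a power of two)."""
    rounds = int(math.log2(world_size))  # number of distinct hop distances
    hop = 2 ** (t % rounds)              # 1, 2, 4, ..., then wrap around
    return (rank + hop) % world_size

# With 8 workers, worker 0 cycles through peers at distances 1, 2, and 4.
print([one_peer_exp_peer(0, t, 8) for t in range(6)])  # [1, 2, 4, 1, 2, 4]
```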
Decent-DP automatically groups model parameters into buckets based on size, optimizing communication efficiency during training.
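As a rough sketch of what size-based bucketing means (a hypothetical greedy scheme for illustration, not Decent-DP's actual grouping code), parameters can be packed into buckets until a byte cap is reached:

```python
import torch.nn as nn

def bucket_parameters(model: nn.Module, bucket_cap_bytes: int = 25 * 1024 * 1024):
    """Greedily pack trainable parameters into buckets of roughly `bucket_cap_bytes`."""
    buckets, current, current_bytes = [], [], 0
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        nbytes = p.numel() * p.element_size()
        if current and current_bytes + nbytes > bucket_cap_bytes:
            buckets.append(current)  # close the full bucket and start a new one
            current, current_bytes = [], 0
        current.append(name)
        current_bytes += nbytes
    if current:
        buckets.append(current)
    return buckets

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(),
                      nn.Linear(512, 512), nn.ReLU(),
                      nn.Linear(512, 512))
for i, bucket in enumerate(bucket_parameters(model, bucket_cap_bytes=2 * 1024 * 1024)):
    print(f"bucket {i}: {bucket}")  # each ~1 MB weight lands in a bucket together with its bias
```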
The framework handles gradient accumulation seamlessly, making it easy to simulate larger batch sizes across multiple workers.
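For context, the snippet below shows what gradient accumulation amounts to in plain PyTorch with an explicit optimizer; it is only a conceptual illustration with made-up sizes and does not reflect how or where Decent-DP hooks its own update.

```python
import torch
import torch.nn as nn

# Plain-PyTorch gradient-accumulation sketch (illustrative only): gradients from
# several micro-batches are summed before a single optimizer step, which emulates
# a larger effective batch size.
model = nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
accum_steps = 4  # micro-batches per effective batch

optimizer.zero_grad()
for step in range(8):  # 8 micro-batches -> 2 optimizer updates
    data, target = torch.randn(16, 10), torch.randn(16, 1)
    loss = nn.functional.mse_loss(model(data), target) / accum_steps  # scale to keep the mean
    loss.backward()                      # gradients accumulate in .grad
    if (step + 1) % accum_steps == 0:
        optimizer.step()                 # one update per accum_steps micro-batches
        optimizer.zero_grad()
```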
The code for the experiments conducted in the paper is available at: 🔍 WangZesen/Decentralized-Training-Exp
Comprehensive documentation, including tutorials, API references, and performance tips, is available on the GitHub page: Decent-DP Documentation
If you use Decent-DP in your research, please cite our work:
```bibtex
@article{wang2025decentralized,
  title={From Promise to Practice: Realizing High-Performance Decentralized Training},
  author={Wang, Zesen and Zhang, Jiaojiao and Wu, Xuyang and Johansson, Mikael},
  journal={arXiv preprint arXiv:2410.11998},
  year={2025}
}
```

We welcome contributions from the community!
To get involved:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Submit a pull request with a clear description of your changes.
- For any issues or feature requests, please open an issue on GitHub.
Decent-DP is released under the MIT License.
The computations and storage resources were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement no. 2022-06725.
🚀 Happy training with Decent-DP!
