A clean, modular reimplementation of PUDM (CVPR 2024) that supports both DDPM and Flow Matching as generative strategies for point cloud upsampling.
```
pudm_extension/
├── configs/                 # Experiment configs (PU1K.json, PUGAN.json)
├── compile_ops.sh           # Build CUDA extensions
├── notebooks/               # Colab notebooks (one per strategy)
│   ├── pudm_ddpm.ipynb
│   └── pudm_flow_matching.ipynb
├── src/
│   ├── data/                # Datasets (PU1K, PUGAN) and augmentation
│   ├── generative/          # Strategy pattern: DDPM, Flow Matching
│   │   ├── base.py          # Abstract GenerativeStrategy
│   │   ├── ddpm.py          # DDPMStrategy (T=1000)
│   │   └── flow_matching.py # FlowMatchingStrategy (ODE)
│   ├── metrics/             # Chamfer Distance, Hausdorff Distance
│   ├── models/              # PointNet2 backbone with cross-attention
│   ├── ops/                 # CUDA ops (pointnet2_ops, pointops)
│   ├── scripts/             # Train, sample, evaluate (strategy-agnostic)
│   └── utils/               # Config, seed, point cloud helpers
└── tests/
```
Two ready-to-run Google Colab notebooks are provided (T4 GPU or better):
- `notebooks/pudm_ddpm.ipynb` — DDPM training & evaluation
- `notebooks/pudm_flow_matching.ipynb` — Flow Matching training & evaluation
Each notebook handles the full pipeline end-to-end:
- Cloning & installation — repo clone, pip install
- CUDA extension compilation — pointnet2_ops (JIT) and pointops (`build_ext`), with compiled `.so` files cached on Google Drive so recompilation is skipped on subsequent runs
- Data preparation — zip extraction to local disk, Drive-cached for fast restore across sessions
- Training — mixed precision (fp16 via AMP), checkpoints saved to Drive
- Evaluation — Chamfer Distance, Hausdorff Distance, P2F metrics
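For reference, the Chamfer Distance reported above can be sketched as below. This is a naive O(N·M) NumPy version of one common convention (squared distances, mean over each set); the repo's CUDA metric may use a different variant, so treat this as illustrative only.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3).

    Uses squared Euclidean distances and a mean over each direction;
    conventions differ across papers (sum vs. mean, squared vs. not).
    """
    # Pairwise squared distances, shape (N, M)
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Nearest-neighbor term in each direction
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```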
```bash
# 1. Install Python dependencies
pip install -r requirements.txt

# 2. Compile CUDA extensions
bash compile_ops.sh
```

All scripts accept `--strategy {ddpm,flow_matching}` to select the generative method.
```bash
# DDPM (baseline)
python -m src.scripts.train -c configs/PU1K.json --strategy ddpm

# Flow Matching
python -m src.scripts.train -c configs/PU1K.json --strategy flow_matching
```

```bash
python -m src.scripts.sample -c configs/PU1K.json --strategy ddpm --ckpt_iter 2000
```

Evaluation runs automatically at the end of sampling and reports Chamfer Distance (CD) and Hausdorff Distance (HD).
```bash
python -m src.scripts.example_sample \
    -c configs/PU1K.json \
    --strategy ddpm \
    --ckpt_path logs/checkpoint/pointnet_ema_2000.pkl \
    --input_xyz path/to/input.xyz
```

Each strategy has its own config section in the JSON files:
```json
{
  "ddpm_config": {
    "T": 1000,
    "beta_0": 0.0001,
    "beta_T": 0.02
  },
  "flow_matching_config": {
    "T": 1000,
    "num_steps": 100
  }
}
```

The correct section is selected automatically based on the `--strategy` flag.
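Such a selection could look like the helper below; this is a hypothetical sketch mirroring the described behavior, not the repo's actual config loader, and the function name is invented.

```python
import json

def load_strategy_config(path, strategy):
    """Pick the "<strategy>_config" section out of an experiment JSON.

    Illustrative only: mirrors the README's description of how the
    --strategy flag selects a config section.
    """
    with open(path) as f:
        cfg = json.load(f)
    return cfg[f"{strategy}_config"]
```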
| Strategy | Method | Sampling | Config Key | Key Params |
|---|---|---|---|---|
| `ddpm` | Denoising Diffusion | T-step reverse + DDIM | `ddpm_config` | `T`, `beta_0`, `beta_T` |
| `flow_matching` | Conditional Flow Matching | Euler ODE integration | `flow_matching_config` | `T`, `num_steps` |
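The Euler ODE integration used by Flow Matching sampling can be sketched in a few lines; here `velocity_fn` is a stand-in for the trained velocity network, and the fixed-step schedule over t ∈ [0, 1] is an assumption about the implementation.

```python
import numpy as np

def euler_integrate(velocity_fn, z, num_steps):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with fixed-step Euler,
    starting from noise z (the typical Flow Matching sampling loop)."""
    x = z
    dt = 1.0 / num_steps
    for k in range(num_steps):
        x = x + dt * velocity_fn(x, k * dt)
    return x
```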
Both strategies share the same PointNet2 backbone and condition encoder. The only difference is how noisy/interpolated samples are created during training and how denoising/integration proceeds during inference.
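That difference in training-sample construction can be sketched as follows. The schedule values match the config above (`beta_0=1e-4`, `beta_T=0.02`, `T=1000`); the function names and the linear-interpolation form of Flow Matching are illustrative assumptions, not the repo's exact code.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear beta schedule from the config
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention product

def ddpm_noisy_sample(x0, t, rng):
    # DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

def fm_interpolated_sample(x0, t_frac, rng):
    # Conditional Flow Matching: x_t = (1 - t) * z + t * x0,
    # with regression target the velocity u = x0 - z
    z = rng.standard_normal(x0.shape)
    return (1.0 - t_frac) * z + t_frac * x0, x0 - z
```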
To add a new strategy:

- Subclass `GenerativeStrategy` in `src/generative/`
- Implement `compute_hyperparams()`, `training_loss()`, `sample()`, and `name`
- Register in `src/generative/__init__.py` → `STRATEGIES` dict
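A minimal sketch of those three steps; the method names come from the list above, but all signatures, the `STRATEGIES` shape, and `MyStrategy` itself are assumptions for illustration.

```python
from abc import ABC, abstractmethod

STRATEGIES = {}  # stand-in for the dict in src/generative/__init__.py

class GenerativeStrategy(ABC):
    """Abstract interface; method names per the README, signatures assumed."""
    name = "base"

    @abstractmethod
    def compute_hyperparams(self, config): ...

    @abstractmethod
    def training_loss(self, model, x0, condition): ...

    @abstractmethod
    def sample(self, model, condition): ...

class MyStrategy(GenerativeStrategy):  # step 1: subclass
    name = "my_strategy"

    def compute_hyperparams(self, config):  # step 2: implement the interface
        return config

    def training_loss(self, model, x0, condition):
        return 0.0

    def sample(self, model, condition):
        return None

STRATEGIES[MyStrategy.name] = MyStrategy  # step 3: register
```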
Based on PUDM: Point Cloud Upsampling via Denoising Diffusion Model (CVPR 2024).