PUDM Extension — Point Cloud Upsampling with Swappable Generative Strategies

A clean, modular reimplementation of PUDM (CVPR 2024) that supports both DDPM and Flow Matching as generative strategies for point cloud upsampling.

Project Structure

pudm_extension/
├── configs/              # Experiment configs (PU1K.json, PUGAN.json)
├── compile_ops.sh        # Build CUDA extensions
├── notebooks/            # Colab notebooks (one per strategy)
│   ├── pudm_ddpm.ipynb
│   └── pudm_flow_matching.ipynb
├── src/
│   ├── data/             # Datasets (PU1K, PUGAN) and augmentation
│   ├── generative/       # Strategy pattern: DDPM, Flow Matching
│   │   ├── base.py       # Abstract GenerativeStrategy
│   │   ├── ddpm.py       # DDPMStrategy (T=1000)
│   │   └── flow_matching.py  # FlowMatchingStrategy (ODE)
│   ├── metrics/          # Chamfer Distance, Hausdorff Distance
│   ├── models/           # PointNet2 backbone with cross-attention
│   ├── ops/              # CUDA ops (pointnet2_ops, pointops)
│   ├── scripts/          # Train, sample, evaluate (strategy-agnostic)
│   └── utils/            # Config, seed, point cloud helpers
└── tests/

Setup (Colab)

Two ready-to-run Google Colab notebooks are provided (T4 GPU or better):

  • notebooks/pudm_ddpm.ipynb — DDPM training & evaluation
  • notebooks/pudm_flow_matching.ipynb — Flow Matching training & evaluation

Each notebook handles the full pipeline end-to-end:

  1. Cloning & installation — repo clone, pip install
  2. CUDA extension compilation — pointnet2_ops (JIT) and pointops (build_ext), with compiled .so files cached on Google Drive so recompilation is skipped on subsequent runs
  3. Data preparation — zip extraction to local disk, Drive-cached for fast restore across sessions
  4. Training — mixed precision (fp16 via AMP), checkpoints saved to Drive
  5. Evaluation — Chamfer Distance, Hausdorff Distance, P2F metrics
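The Drive-caching of compiled binaries in step 2 can be sketched as follows. This is a minimal illustration, not the notebooks' actual code; the cache/build directory layout and `.so` names are assumptions.

```python
import shutil
from pathlib import Path

def restore_cached_ops(cache_dir: str, build_dir: str, so_names: list) -> bool:
    """Restore compiled CUDA extension binaries (.so) from a cache directory.

    Returns True if every .so was restored (compilation can be skipped),
    False if at least one is missing and a fresh build is needed.
    """
    cache, build = Path(cache_dir), Path(build_dir)
    build.mkdir(parents=True, exist_ok=True)
    if any(not (cache / name).is_file() for name in so_names):
        return False  # fall back to compiling, then cache the results
    for name in so_names:
        shutil.copy2(cache / name, build / name)  # preserves timestamps
    return True
```

On a fresh Colab session the function copies the cached binaries back into the build tree; on a cache miss the notebook compiles as usual and then populates the cache.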

Manual Setup

# 1. Install Python dependencies
pip install -r requirements.txt

# 2. Compile CUDA extensions
bash compile_ops.sh

Usage

All scripts accept --strategy {ddpm,flow_matching} to select the generative method.

Training

# DDPM (baseline)
python -m src.scripts.train -c configs/PU1K.json --strategy ddpm

# Flow Matching
python -m src.scripts.train -c configs/PU1K.json --strategy flow_matching

Sampling

python -m src.scripts.sample -c configs/PU1K.json --strategy ddpm --ckpt_iter 2000

Evaluation

Evaluation runs automatically at the end of sampling and reports Chamfer Distance (CD) and Hausdorff Distance (HD).
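For reference, the two reported metrics can be computed with a brute-force NumPy sketch like the one below. The repository's own implementations live in src/metrics/ and are likely CUDA-accelerated; this version only illustrates the definitions.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets of shape (N, 3) and (M, 3).

    Averages the squared distance from each point to its nearest neighbor
    in the other set, summed over both directions.
    """
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise sq. dists
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

def hausdorff_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Hausdorff Distance: the worst-case nearest-neighbor distance."""
    d = np.sqrt(((p[:, None, :] - q[None, :, :]) ** 2).sum(-1))
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

For identical point sets both metrics are zero; CD grows with average mismatch, while HD is dominated by the single worst outlier.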

Single-File Inference

python -m src.scripts.example_sample \
    -c configs/PU1K.json \
    --strategy ddpm \
    --ckpt_path logs/checkpoint/pointnet_ema_2000.pkl \
    --input_xyz path/to/input.xyz

Generative Strategies

Each strategy has its own config section in the JSON files:

{
    "ddpm_config": {
        "T": 1000,
        "beta_0": 0.0001,
        "beta_T": 0.02
    },
    "flow_matching_config": {
        "T": 1000,
        "num_steps": 100
    }
}

The correct section is selected automatically based on the --strategy flag.
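The selection logic amounts to mapping the flag to a config key. A minimal sketch, assuming the `<strategy>_config` naming convention shown above (the helper name is hypothetical):

```python
import json

def load_strategy_config(config_path: str, strategy: str) -> dict:
    """Return the config sub-section matching the --strategy flag.

    'ddpm' -> 'ddpm_config', 'flow_matching' -> 'flow_matching_config'.
    """
    with open(config_path) as f:
        cfg = json.load(f)
    key = f"{strategy}_config"
    if key not in cfg:
        raise KeyError(f"no config section '{key}' for strategy '{strategy}'")
    return cfg[key]
```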

| Strategy | Method | Sampling | Config Key | Key Params |
|---|---|---|---|---|
| `ddpm` | Denoising Diffusion | T-step reverse + DDIM | `ddpm_config` | `T`, `beta_0`, `beta_T` |
| `flow_matching` | Conditional Flow Matching | Euler ODE integration | `flow_matching_config` | `T`, `num_steps` |

Both strategies share the same PointNet2 backbone and condition encoder. The only difference is how noisy/interpolated samples are created during training and how denoising/integration proceeds during inference.
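That training-time difference can be illustrated with the standard constructions for each method (a NumPy sketch with hypothetical helper names, not the repository's code; the flow-matching convention shown is the common linear-interpolation form):

```python
import numpy as np

def ddpm_noisy_sample(x0, t, alpha_bar):
    """DDPM forward process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    eps = np.random.randn(*x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps  # the network regresses eps

def flow_matching_sample(x0, t_frac):
    """Conditional flow matching: x_t linearly interpolates noise -> data;
    the regression target is the constant velocity x_0 - noise."""
    noise = np.random.randn(*x0.shape)
    xt = (1.0 - t_frac) * noise + t_frac * x0
    return xt, x0 - noise  # the network regresses the velocity field
```

At inference, DDPM runs the learned reverse chain (or a shortened DDIM schedule), while flow matching integrates the learned velocity field with `num_steps` Euler steps.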

Adding a New Strategy

  1. Subclass GenerativeStrategy in src/generative/
  2. Implement compute_hyperparams(), training_loss(), sample(), and name
  3. Register it in the STRATEGIES dict in src/generative/__init__.py
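The steps above can be sketched as follows. The interface mirrors the methods listed in step 2, but the real GenerativeStrategy in src/generative/base.py may use different signatures; treat this as an assumption-laden outline.

```python
from abc import ABC, abstractmethod

class GenerativeStrategy(ABC):
    """Sketch of the abstract strategy interface described above."""

    @property
    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def compute_hyperparams(self, config: dict) -> dict: ...

    @abstractmethod
    def training_loss(self, model, x0, condition): ...

    @abstractmethod
    def sample(self, model, shape, condition): ...

class MyStrategy(GenerativeStrategy):
    """A new strategy: implement all abstract members, then register it."""

    @property
    def name(self) -> str:
        return "my_strategy"

    def compute_hyperparams(self, config: dict) -> dict:
        return {"num_steps": config.get("num_steps", 50)}

    def training_loss(self, model, x0, condition):
        raise NotImplementedError  # strategy-specific loss goes here

    def sample(self, model, shape, condition):
        raise NotImplementedError  # strategy-specific sampler goes here

# Step 3: registration, mirroring the STRATEGIES dict in src/generative/__init__.py
STRATEGIES = {"my_strategy": MyStrategy}
```

Once registered, the new name becomes a valid value for `--strategy` in the train/sample scripts.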

Credits

Based on PUDM: Point Cloud Upsampling via Denoising Diffusion Model (CVPR 2024).
