# Train-Once Plan-Anywhere Kinodynamic Motion Planning via Diffusion Trees (CoRL 2025)

Official implementation of the paper "Train-Once Plan-Anywhere Kinodynamic Motion Planning via Diffusion Trees" (CoRL 2025). This repository contains the code and experiment setup, including training configuration, experiment execution scripts, and pretrained model checkpoints.
The associated car dataset and model weights are available for download here.
## Installation

We recommend creating a Python virtual environment before installing dependencies.

1. Clone the repository:

```bash
git clone https://github.com/Yanivhass/ditree.git
cd ditree
```

2. Create and activate a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate   # On Linux/Mac
venv\Scripts\activate      # On Windows
```

3. Install PyTorch. Visit PyTorch.org to find the correct installation command for your system (CPU or GPU). Example (CUDA 11.8):

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

4. Install the remaining dependencies. `requirements.txt` is a minimal list of dependencies for running the car experiments; other functionality may require additional installation (e.g., MuJoCo for the Ant environment):

```bash
pip install -r requirements.txt
```
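After the installation steps above, a quick sanity check can confirm that the virtual environment is active and whether PyTorch sees a GPU. This is a minimal sketch (not part of the repository), stdlib-only apart from the optional `torch` import:

```python
import sys


def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment while
    # sys.base_prefix still points at the system interpreter.
    return sys.prefix != sys.base_prefix


print("virtualenv active:", in_virtualenv())

try:
    import torch
    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed yet")
```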
## Repository Structure

```
├── run_scenarios.py    # Runs the experiments
├── train_manager.py    # Configures and launches training sessions
├── checkpoints/        # Contains pretrained model weights
├── data/               # (optional) Local dataset storage
└── requirements.txt    # Python dependencies
```
## Running Experiments

```bash
python run_scenarios.py
```

## Training

```bash
python train_manager.py
```

## Downloads

Pretrained model weights and the car dataset can be downloaded from: Google Drive Link. The AntMaze dataset is available through Minari. Place the downloaded files into:

```
checkpoints/
data/
```
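For the AntMaze data, the fetch can be sketched with Minari's Python API. The dataset ID below is an assumption (not stated in this repository); run `minari list remote` to find the exact identifier:

```python
def fetch_antmaze(dataset_id: str = "D4RL/antmaze/umaze-v1"):
    """Download an AntMaze dataset via Minari (dataset ID is hypothetical)."""
    try:
        import minari
    except ImportError:
        print("Minari is not installed; run: pip install minari")
        return None
    # download=True pulls the dataset into Minari's local cache if missing
    return minari.load_dataset(dataset_id, download=True)
```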
## Citation

If you use this work, please cite:

```bibtex
@inproceedings{
hassidof2025trainonce,
title={Train-Once Plan-Anywhere Kinodynamic Motion Planning via Diffusion Trees},
author={Yaniv Hassidof and Tom Jurgenson and Kiril Solovey},
booktitle={9th Annual Conference on Robot Learning},
year={2025},
url={https://openreview.net/forum?id=lJWUourMTT}
}
```