flamschou/DiTree_MVA

DiTree: Kinodynamic Motion Planning via Diffusion Trees (Guided Extension)

Report: see report.pdf for full details.

MVA Project (Robotics): an extension of the CoRL 2025 paper by Hassidof et al.


🚀 Our Contribution: Guided Tree Search

While the original DiTree relies on global diffusion policies, exploring complex mazes with narrow passages remains challenging. In this fork, we implemented a Guided Tree Search strategy that biases the exploration.

Key Features Implemented:

  • BFS Initialization: We introduced a discrete, grid-based BFS pre-processing step (generate_intermediate_goals) that finds a coarse path skeleton around static obstacles.
  • Intermediate Goals: The planner automatically generates a sequence of intermediate waypoints along the BFS path.
  • Biased Sampling Strategy: We modified the expansion loop in RRT.py: instead of sampling purely at random or toward the final goal, the tree progressively targets the next intermediate waypoint using a proximity-cost heuristic (get_distance_to_goal).
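
The BFS pre-processing step described above could look roughly like the following sketch. The grid representation (boolean occupancy), 4-connectivity, and the `stride` downsampling parameter are illustrative assumptions, not the repository's exact implementation:

```python
from collections import deque

def generate_intermediate_goals(grid, start, goal, stride=5):
    """BFS over a boolean occupancy grid (True = obstacle); returns a
    coarse list of intermediate waypoints along the shortest grid path.
    Hypothetical sketch of the pre-processing step."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]] and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    if goal not in parent:
        return []  # no collision-free grid path exists
    # Reconstruct the path, then keep every `stride`-th cell as a waypoint.
    path = []
    cell = goal
    while cell is not None:
        path.append(cell)
        cell = parent[cell]
    path.reverse()
    waypoints = path[::stride]
    if waypoints[-1] != goal:
        waypoints.append(goal)
    return waypoints
```

The returned waypoint list is the "coarse path skeleton": cheap to compute once per environment, and dense enough to pull the tree through narrow passages.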

Result: This approach significantly reduces the time to find a first solution in "Maze" environments by guiding the diffusion sampler through bottlenecks.
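
The biased expansion described above can be sketched as follows. The helper names mirror those mentioned in the feature list, but the signatures, the `goal_bias` probability, the switch radius, and the 2D workspace bounds are illustrative assumptions:

```python
import math
import random

def get_distance_to_goal(state, goal):
    """Euclidean proximity cost between a 2D state and a goal (hypothetical)."""
    return math.hypot(state[0] - goal[0], state[1] - goal[1])

def sample_target(tree_states, waypoints, final_goal,
                  goal_bias=0.7, switch_radius=1.0):
    """Pick the next expansion target: with probability `goal_bias`, aim at
    the next unreached intermediate waypoint; otherwise sample uniformly.
    Note: pops reached waypoints from `waypoints` in place."""
    # Advance past waypoints the tree has already reached.
    while waypoints and any(
            get_distance_to_goal(s, waypoints[0]) < switch_radius
            for s in tree_states):
        waypoints.pop(0)
    target = waypoints[0] if waypoints else final_goal
    if random.random() < goal_bias:
        return target
    # Fallback: uniform sample, assuming a [0, 10]^2 workspace.
    return (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
```

Keeping a nonzero uniform-sampling probability preserves the probabilistic completeness of the underlying RRT-style search even when the BFS skeleton is misleading.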


DiTree

Project page, paper

Official implementation of the paper "Train-Once Plan-Anywhere Kinodynamic Motion Planning via Diffusion Trees" [CoRL 2025]

This repository contains the code and experiment setup for Train-Once Plan-Anywhere Kinodynamic Motion Planning via Diffusion Trees. It includes training configuration, experiment execution scripts, and pretrained model checkpoints.

The associated car dataset and model weights are available for download here.


📦 Installation

We recommend creating a Python virtual environment before installing dependencies.

1. Clone the repository

```shell
git clone https://github.com/Yanivhass/ditree.git
cd ditree
```

2. Create a virtual environment

```shell
python3 -m venv venv
source venv/bin/activate   # On Linux/macOS
venv\Scripts\activate      # On Windows
```

3. Install PyTorch. Visit PyTorch.org to find the correct installation command for your system (CPU or GPU). Example (CUDA 11.8):

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

4. Install the remaining dependencies. We provide a minimal list of dependencies for running the car experiments; other functionality may require additional installs (e.g. MuJoCo for the Ant environment).

```shell
pip install -r requirements.txt
```

📂 Project Structure

```
.
├── run_scenarios.py      # Runs the experiments
├── train_manager.py      # Configures and launches training sessions
├── checkpoints/          # Contains pretrained model weights
├── data/                 # (optional) Local dataset storage
└── requirements.txt      # Python dependencies
```

🚀 Usage

Running Experiments

```shell
python run_scenarios.py
```

Training

```shell
python train_manager.py
```

📥 Pretrained Models & Dataset

Pretrained model weights and the car dataset can be downloaded from the Google Drive link. The AntMaze dataset is available via Minari. Place the downloaded files into:

```
checkpoints/
data/
```

Citation

If you use this work, please cite:

```bibtex
@inproceedings{hassidof2025trainonce,
  title     = {Train-Once Plan-Anywhere Kinodynamic Motion Planning via Diffusion Trees},
  author    = {Yaniv Hassidof and Tom Jurgenson and Kiril Solovey},
  booktitle = {9th Annual Conference on Robot Learning},
  year      = {2025},
  url       = {https://openreview.net/forum?id=lJWUourMTT}
}
```



