Felix Duelmer, Mohammad Farid Azampour, Magdalena Wysocki, Nassir Navab,
Technical University of Munich
Welcome to the official repository for the paper: "UltraRay: Introducing Full-Path Ray Tracing in Physics-Based Ultrasound Simulation", a novel approach to simulating ultrasound images using full-path ray tracing.
UltraRay implements a complete ultrasound simulation pipeline that models the full acoustic path from transducer emission to signal reception. The system consists of three main stages:
- Scene Setup: 3D scene definition, including transducer placement and the definition and localization of tissue properties.
- Ray Tracing & RF Data Generation: Raw radiofrequency (RF) data generation from traced acoustic paths with proper time-of-flight and attenuation modeling.
- Beamforming & B-Mode Reconstruction: Signal processing to reconstruct B-mode ultrasound images from RF data.
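The time-of-flight and attenuation terms used in the RF stage follow standard ultrasound relations. The sketch below uses representative soft-tissue values (1540 m/s, 0.5 dB/cm/MHz) as illustrative assumptions, not constants taken from the UltraRay source:

```python
# Representative soft-tissue values -- assumptions for illustration only.
SPEED_OF_SOUND = 1540.0      # m/s
ALPHA_DB_CM_MHZ = 0.5        # attenuation coefficient, dB / (cm * MHz)

def time_of_flight(path_length_m: float) -> float:
    """Two-way travel time for an echo along a one-way path of given length."""
    return 2.0 * path_length_m / SPEED_OF_SOUND

def attenuation_factor(path_length_m: float, freq_mhz: float) -> float:
    """Amplitude scaling after two-way, frequency-dependent propagation loss."""
    depth_cm = path_length_m * 100.0
    loss_db = 2.0 * ALPHA_DB_CM_MHZ * depth_cm * freq_mhz
    return 10.0 ** (-loss_db / 20.0)

# Example: a scatterer 3 cm deep, insonified at 5 MHz.
tof = time_of_flight(0.03)          # ~39 microseconds round trip
amp = attenuation_factor(0.03, 5.0) # ~0.18 of the emitted amplitude
```

Each traced acoustic path contributes an echo delayed by its time of flight and scaled by the accumulated attenuation along the path.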
Comparison results showing UltraRay's simulation capabilities on cylindrical and vertebrae phantom data
- src/: Main simulation entry point and scene configuration files for running ultrasound simulations
- ultra_ray/: Core UltraRay library containing all simulation components and algorithms
- ultra_ray/integrators/: Ray tracing integrators that handle acoustic wave propagation and scattering
- ultra_ray/sensors/: Convex transducer sensor implementations for ultrasound probe modeling
- ultra_ray/beamformers/: Signal processing modules for RF data beamforming and B-mode reconstruction
- ultra_ray/films/: Specialized film classes for capturing RF data from ray tracing
- ultra_ray/render/: Rendering utilities and echo processing blocks
- ultra_ray/utils/: Utility functions, including transducer mesh generation
- scenes/: 3D scene data and objects for simulation (download required - see Data Download section)
- external/: Modified Mitsuba 3 renderer with ultrasound-specific extensions
- results/: Generated simulation outputs and beamformed images
- assets/: Documentation figures and pipeline diagrams
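To illustrate what the beamformers stage does conceptually, here is a minimal delay-and-sum sketch for a single focal point. This is an illustrative simplification, not UltraRay's actual implementation:

```python
import numpy as np

def delay_and_sum(rf, element_x, fs, c, focus_x, focus_z):
    """Minimal delay-and-sum beamformer for one focal point.

    rf:        (n_elements, n_samples) received RF traces
    element_x: (n_elements,) lateral element positions (m)
    fs:        sampling rate (Hz); c: speed of sound (m/s)
    """
    # Two-way distance: transmit from the array origin, receive per element.
    tx = np.hypot(focus_x, focus_z)              # transmit path (m)
    rx = np.hypot(element_x - focus_x, focus_z)  # receive path per element (m)
    delays = (tx + rx) / c                       # seconds per element
    # Pick the sample matching each element's delay and sum coherently.
    idx = np.clip(np.round(delays * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()
```

A real beamformer evaluates this for every pixel of the output grid and typically adds apodization and interpolation; the core idea of aligning channel data by geometric delay is the same.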
- CMake (version 3.9.0 or newer)
- Python (version 3.8 or newer)
- Git
- CUDA-capable GPU with up-to-date drivers (minimum driver version v535)
- On Linux: Clang compiler (recommended over GCC)
Clone this repository with all submodules:
git clone --recursive https://github.com/Felixduelmer/UltraRay.git
cd UltraRay

If you already cloned without --recursive, initialize the submodules:

git submodule update --init --recursive

Create and activate a virtual environment to isolate UltraRay dependencies:
# Create virtual environment
python -m venv ultraray_env
# Activate virtual environment
source ultraray_env/bin/activate

Step 3: Build Mitsuba 3 (Further information can be found here)
UltraRay is based on Mitsuba 3 and uses a custom fork with ultrasound-specific modifications. The cuda_mono variant is recommended to run UltraRay. If no GPU is available you can switch to llvm_mono to run the ray tracing in parallel on the available CPU cores.
Important: Build Mitsuba with the same Python version that you will later use to run the code.
# Navigate to the Mitsuba submodule
cd external/mitsuba
# Install dependencies (Ubuntu/Debian)
sudo apt install clang-10 libc++-10-dev libc++abi-10-dev cmake ninja-build libpng-dev libjpeg-dev libpython3-dev python3-distutils
# Set compiler environment variables
export CC=clang-10
export CXX=clang++-10
# Create build directory
mkdir build
cd build
# Configure with CMake
cmake -GNinja ..

On Windows with Visual Studio:

# Navigate to the Mitsuba submodule
cd external/mitsuba

# Configure with Visual Studio
cmake -G "Visual Studio 17 2022" -A x64 -B build

On macOS:

# Install Xcode command line tools
xcode-select --install
xcode-select --install
# Navigate to the Mitsuba submodule
cd external/mitsuba
# Create build directory
mkdir build
cd build
# Configure with CMake
cmake -GNinja ..

IMPORTANT: Before building, you must configure the required variant:

- Open the generated mitsuba.conf file in the build directory
- Find the "enabled" section (around line 86)
- Add "cuda_mono" or "llvm_mono" to the list of enabled variants (depending on GPU availability):
"enabled": [
"scalar_rgb",
"scalar_spectral",
"cuda_mono",
"llvm_mono",
"cuda_ad_rgb",
"llvm_ad_rgb"
],

# From the mitsuba/build directory
ninja

On Windows with Visual Studio:

cmake --build build --config Release

After compilation, configure the environment variables:

# From the mitsuba/build directory
source setpath.sh

Note: Keep this virtual environment activated for all subsequent steps and when running UltraRay simulations.
Return to the main project directory and install the package. Choose the appropriate installation option based on your system:
# Navigate back to the main project root
# Navigate back to the main project root
cd ../../../

# Install UltraRay with core dependencies only
pip install -e .

# Install UltraRay with CUDA 12.x support
pip install -e ".[cuda]"

# Install UltraRay with CUDA 11.x support
pip install -e ".[cuda11]"

# Development tools + CUDA 12.x
pip install -e ".[dev,cuda]"

# Development tools + CUDA 11.x
pip install -e ".[dev,cuda11]"

# Development tools only (CPU-only)
pip install -e ".[dev]"

Test that everything is working:
import mitsuba as mi
mi.set_variant('cuda_mono') # Use CUDA variant
import ultra_ray
print("UltraRay setup successful with GPU support!")

Or, with the CPU variant:

import mitsuba as mi
mi.set_variant('llvm_mono')  # Use CPU variant
import ultra_ray
print("UltraRay setup successful with CPU support!")

Scene data to reproduce the B-mode simulations from the paper can be downloaded from here.
Download the data and place it in the scenes/ directory to run the examples.
To reproduce the ultrasound simulation results from the paper, run the following commands:
python src/main.py --config src/configs/cylinder_convex.txt
python src/main.py --config src/configs/vertebrae_convex.txt

The simulations will generate:

- Raw RF data from the ray tracing process
- Beamformed B-mode ultrasound images saved to the results/ directory
- Execution time benchmarks for each simulation stage
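Conceptually, turning beamformed RF lines into B-mode pixels involves envelope detection followed by log compression. The sketch below shows one standard way to do this (analytic signal via FFT); it is illustrative and not the pipeline's exact code:

```python
import numpy as np

def rf_to_bmode(rf, dynamic_range_db=60.0):
    """Convert beamformed RF lines to log-compressed B-mode intensities.

    rf: (n_lines, n_samples) real-valued RF data.
    Returns values in [0, 1] after envelope detection and log compression.
    """
    # Analytic signal via the FFT (equivalent to a Hilbert transform):
    # zero negative frequencies, double positive ones.
    n = rf.shape[-1]
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    envelope = np.abs(np.fft.ifft(np.fft.fft(rf, axis=-1) * h, axis=-1))
    # Log compression into the chosen dynamic range, normalized to the peak.
    env_db = 20.0 * np.log10(envelope / envelope.max() + 1e-12)
    return np.clip((env_db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)
```

The dynamic range (60 dB here) controls how much weak scattering remains visible relative to the brightest reflector.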
If you prefer not to run the setpath script each time, you can add the Mitsuba Python path directly:
import sys
sys.path.append("external/mitsuba/build/python")
import mitsuba as mi
mi.set_variant('cuda_mono')

- This project uses a modified version of Mitsuba 3 with ultrasound-specific features
- Make sure your CUDA drivers are up-to-date
- Compilation time scales with the number of enabled variants
- Generated beamformed images are saved to the results/ directory
- Performance Note: Due to Mitsuba's warm-up phase, where the symbolic loop is recorded during the first execution, the initial simulation run will be slower. Subsequent executions of the same simulation will be significantly faster because the compiled kernels are reused.
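To observe the warm-up effect yourself, you can time the first and a subsequent run. A small helper sketch follows; `run_simulation` in the comment is a hypothetical stand-in for your own entry point:

```python
import time

def timed(fn, *args, **kwargs):
    """Run `fn` and return its result alongside the wall-clock time taken."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical usage with a simulation entry point:
# _, first = timed(run_simulation, config)  # includes kernel recording
# _, later = timed(run_simulation, config)  # reuses compiled kernels
```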
Parts of the code are from the mitransient implementation.

