Labeling 3D point clouds can be time-consuming and challenging, especially in complex forest environments. This repository offers a way to make the process more efficient and accessible by converting terrestrial LiDAR data into 2D spherical projection images, where each pixel represents scalar values like intensity, range, curvature, and roughness.
- Convert 3D point clouds into spherical projection images based on elevation and azimuth angles.
- Use familiar 2D annotation tools to segment objects (e.g., roots, stems, and canopy in forest scenes), reducing the complexity of direct 3D labeling.
- Map the labeled masks back to the 3D point cloud, maintaining structural accuracy while making annotation more manageable.
- A Practical Approach – Adapts a 2D workflow to help with a common challenge in 3D annotation.
- More Efficient Annotation – Speeds up the labeling process, especially for large-scale datasets.
- Supports 3D Scene Understanding – Helps generate training data for deep learning in natural, unstructured environments where labeled datasets are still limited.
While the test datasets focus on forest scenes—Harvard Forest and Palau Mangrove Roots—this method is versatile and can be applied to a wide range of terrestrial LiDAR survey applications. If you're working with TLS point clouds and looking for a more efficient and intuitive way to annotate them, this tool might be a useful addition to your workflow! 🤞
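The core idea of the spherical projection can be pictured with a short sketch. This is a hypothetical illustration, not the repository's exact implementation (the real pipeline lives in `run_3d_to_2d_pipeline.py`): each point's azimuth and elevation angles index a pixel in a 2D image, and the nearest return per pixel keeps its scalar value.

```python
import numpy as np

def spherical_project(points, height=64, width=1024):
    """Map N x 3 points (x, y, z) to (row, col) pixel coordinates of an
    equirectangular azimuth-by-elevation image. Sketch only."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range per point
    azimuth = np.arctan2(y, x)                      # [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))  # [-pi/2, pi/2]
    # Normalize both angles to [0, 1] and scale to the image size.
    u = ((azimuth / np.pi) + 1.0) * 0.5 * width
    v = (1.0 - (elevation / (np.pi / 2) + 1.0) * 0.5) * height
    u = np.clip(u.astype(int), 0, width - 1)
    v = np.clip(v.astype(int), 0, height - 1)
    return v, u, r

def make_intensity_image(points, intensity, height=64, width=1024):
    """Fill an intensity image, keeping the nearest return per pixel."""
    img = np.zeros((height, width), dtype=np.float32)
    depth = np.full((height, width), np.inf)
    v, u, r = spherical_project(points, height, width)
    for vi, ui, ri, ii in zip(v, u, r, intensity):
        if ri < depth[vi, ui]:          # closer point wins the pixel
            depth[vi, ui] = ri
            img[vi, ui] = ii
    return img
```

The same index arrays (`v`, `u`) can later be reused to carry labels from the 2D image back onto the 3D points.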
```bash
# create a new conda environment
conda create -n tls_env python=3.9
conda activate tls_env

# install dependencies with pip
pip install open3d
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu121
pip install scikit-image
pip install laspy
pip install plyfile
pip install xarray

# pin numpy to avoid the open3d `Segmentation fault` issue
pip install numpy==1.26.4

# install torch-geometric and its companion packages,
# assuming PyTorch 2.4.1 with CUDA 12.1 as above
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.4.1+cu121.html
pip install torch-sparse -f https://data.pyg.org/whl/torch-2.4.1+cu121.html
pip install torch-cluster -f https://data.pyg.org/whl/torch-2.4.1+cu121.html
pip install torch-spline-conv -f https://data.pyg.org/whl/torch-2.4.1+cu121.html
pip install torch-geometric
pip install seaborn
```
- Prepare input files. Currently supported input formats:
  - `.txt` - from Palau Mangrove scans, derived from `.gbl` files using `CBL_GBL_Processing_UMB.py`; scalar fields apart from x/y/z:
    - zenith angle (0-135°): the angle between the vertical line directly above the observer and an observed point, so a zenith angle of 0° normally means the point is directly overhead. However, the LiDAR was mounted upside down for scanning the roots, so here 0° means the point is directly under the LiDAR and 135° means the point was 45° above it.
    - azimuth angle (0-360°)
    - range1meter (0-R; usually R < 100)
    - intensity (aka remission; 0-4000)
    - return number (1 or 2)
  - `.las` - from Harvard Forest scans; scalar fields apart from x/y/z:
    - intensity (aka remission; 0-4000)
    - return number (1 or 2)
  - `.bin` - from the SemanticKITTI dataset; scalar fields apart from x/y/z:
    - intensity (aka remission; 0.00-0.99)
- Calculate the spherical projection images from the point cloud:
  - Change `CONFIG_PATH` in `config_loader.py` to the desired config file, e.g., `3D_to_2D_config_mangrove_roots.json`.
  - Adjust the paths in the `3D_to_2D_config_mangrove_roots.json` file.
  - Run: `python run_3d_to_2d_pipeline.py`
- Manually label the 2D unwrapped images to get segmentation maps in RGB, e.g., `seg_map_33_01.png`.
  - Refer to `data_annotation_guidline.md` for more details.
- Generate the class-ID-based segmentation map, then attach the class ID and color to the point cloud:
  - Refer to `seg_map_tools_readme.md` for more details.
- Run the refine scripts to refine the segmentation maps:
  - `python tools.refine_pcd_labels.py` - refines the labels in the 3D point cloud.
  - `python tools.refine_2D_mask.py` - refines the 2D segmentation maps based on the refined point cloud.
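The mapping from a labeled RGB segmentation map back to per-point class IDs can be sketched roughly as follows. All names here are illustrative assumptions, not the repository's API, and the real scripts additionally handle refinement and occlusion:

```python
import numpy as np

def colors_to_class_ids(seg_map_rgb, palette):
    """Convert an RGB segmentation map (H x W x 3) to a class-ID map
    using a {class_id: (r, g, b)} palette. Unmatched pixels stay 0."""
    h, w, _ = seg_map_rgb.shape
    class_map = np.zeros((h, w), dtype=np.int32)
    for class_id, color in palette.items():
        mask = np.all(seg_map_rgb == np.asarray(color), axis=-1)
        class_map[mask] = class_id
    return class_map

def attach_labels_to_points(class_map, pixel_v, pixel_u):
    """Look up each point's class via the (row, col) pixel it was
    projected to during the 3D-to-2D step."""
    return class_map[pixel_v, pixel_u]
```

Keeping the per-point pixel indices from the projection step is what makes this lookup a simple fancy-indexing operation.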
| Harvard Forest Intensity Map | Mangrove Roots Intensity Map |
|---|---|
| ![]() | ![]() |

| Harvard Forest Pseudo-RGB_Roughness-Intensity-Range | Mangrove Roots Pseudo-RGB_Roughness-Intensity-Range |
|---|---|
| ![]() | ![]() |

| Harvard Forest Segmentation Map | Mangrove Segmentation Map |
|---|---|
| ![]() | ![]() |

| Harvard Forest Grayscale Segmentation Map | Mangrove Grayscale Segmentation Map |
|---|---|
| ![]() | ![]() |

| Harvard Forest Segmented Point Cloud | Mangrove Segmented Point Cloud |
|---|---|
| ![]() | ![]() |










