Toward Robust Neural Reconstruction from Sparse Point Sets

arXiv Project Page

This repository contains the implementation of the CVPR 2025 paper Toward Robust Neural Reconstruction from Sparse Point Sets by Amine Ouasfi, Shubhendu Jena, Eric Marchand, Adnane Boukhayma.

Overview


This paper proposes a novel approach for unsupervised signed distance learning from sparse and noisy point clouds. The method learns to predict the signed distance function of a 3D shape from a sparse set of points, without requiring any supervision or prior knowledge of the scene.

Repository Structure


The repository is organized as follows:

  • models: contains the implementation of the neural network architecture used in the paper.
  • trainers: contains the implementation of our proposed method.
  • train.py: contains the training script.

Environment setup


The code is written in Python and requires the following dependencies:

# Create conda environment
conda env create

# Activate it
conda activate sdf_dro

# Install pytorch 
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

Data


ShapeNet

We use a subset of the ShapeNet data as chosen by Neural Splines. This data is first preprocessed to be watertight following the pipeline in the Occupancy Networks repository, which provides both the pipeline and the entire preprocessed dataset (73.4 GB).

The Neural Splines split uses the first 20 shapes from the test set of 13 shape classes from ShapeNet. You can download the dataset (73.4 GB) by running the script from Occupancy Networks. Afterwards, the dataset should be located in the data/ShapeNet folder.

Faust

The FAUST dataset can be downloaded from the official website. We followed the preprocessing steps outlined in the Occupancy Networks repository. Specifically, we normalized the meshes to the unit cube and uniformly sampled 100,000 points with their corresponding normals for evaluation.
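The normalization and sampling steps above can be sketched in plain NumPy (a minimal illustration only; the repository's actual preprocessing follows the Occupancy Networks pipeline, and the exact centering/scaling convention used here is an assumption):

```python
import numpy as np

def normalize_to_unit_cube(verts: np.ndarray) -> np.ndarray:
    """Translate and scale vertices so the mesh fits in [-0.5, 0.5]^3.

    This convention (center the bounding box, divide by its longest side)
    is an assumption; check the preprocessing code for the exact one.
    """
    vmin, vmax = verts.min(axis=0), verts.max(axis=0)
    center = (vmin + vmax) / 2.0
    scale = (vmax - vmin).max()
    return (verts - center) / scale

def sample_surface(verts: np.ndarray, faces: np.ndarray, n: int = 100_000, rng=None):
    """Uniformly sample n points (with per-point face normals) from a triangle mesh."""
    rng = np.random.default_rng() if rng is None else rng
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    cross = np.cross(v1 - v0, v2 - v0)
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Pick triangles proportionally to their area, then sample barycentric coords
    face_idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    pts = (v0[face_idx]
           + u[:, None] * (v1[face_idx] - v0[face_idx])
           + v[:, None] * (v2[face_idx] - v0[face_idx]))
    normals = cross[face_idx] / np.linalg.norm(cross[face_idx], axis=1, keepdims=True)
    return pts, normals
```

Libraries such as trimesh provide equivalent functionality out of the box; the pure-NumPy version above just makes the two steps explicit.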

Surface Reconstruction Benchmark data

The Surface Reconstruction Benchmark (SRB) data is provided in the Deep Geometric Prior repository.

If you use this data in your research, make sure to cite the Deep Geometric Prior paper.

Training


To train the SDF network, run the following command:

python train.py sn_config.json

This will train the network using the configuration specified in sn_config.json and store the trained model in the results directory.

Evaluation


To evaluate the trained model, run the following command:

python eval.py sn_config.json

This will evaluate the model on the test set and store the results in the results directory.
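Reconstruction quality in this setting is commonly measured with the Chamfer distance between points sampled from the reconstructed and ground-truth surfaces. A minimal NumPy version is sketched below; whether eval.py uses exactly this variant (e.g. unsquared vs. squared distances) is an assumption:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric (L2, unsquared) Chamfer distance between point sets a (N,3) and b (M,3)."""
    # Dense pairwise Euclidean distances, shape (N, M); fine for a few thousand points
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Average nearest-neighbor distance in both directions
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```

For 100,000-point evaluation clouds, the dense distance matrix is too large; a KD-tree (e.g. scipy.spatial.cKDTree) is the usual replacement for the nearest-neighbor queries.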

Configuration


The configuration file (e.g. sn_config.json) contains the following parameters:

  • n_points: the number of points to sample from the point cloud.
  • sigma: the standard deviation of the noise added to the point cloud.
  • rho: controls the strength of the entropic regularization.
  • lambda_wasserstain: controls how close the worst-case distribution Q′ is to the nominal distribution.
  • m_dro: The number of queries used to estimate the worst-case distribution.

Citation


If you use this code in your research, please cite the following paper:

@inproceedings{ouasfi2025toward,
  title={Toward robust neural reconstruction from sparse point sets},
  author={Ouasfi, Amine and Jena, Shubhendu and Marchand, Eric and Boukhayma, Adnane},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={6552--6562},
  year={2025}
}
