This project is based on the following paper:
@InProceedings{Sarmad_2019_CVPR,
author = {Sarmad, Muhammad and Lee, Hyunjoo Jenny and Kim, Young Min},
title = {RL-GAN-Net: A Reinforcement Learning Agent Controlled GAN Network for Real-Time Point Cloud Shape Completion},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}
Link for the original paper: https://arxiv.org/abs/1904.12304
Refer to the report "Reinforcement_learning_for_point_cloud_completion.pdf" in this repo for detailed explanation.
Requirements:

```shell
conda create -n <env_name> --file requirements_conda.txt python=3.6
pip install -r requirements_pip.txt
# XX.X: your CUDA version
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=XX.X -c pytorch
mkdir data && ln -s <directory of train,test> shape_net_core_uniform_samples_2048_split
```
Steps:
- Download the data from https://github.com/optas/latent_3d_points.
- Process the data with `Processdata2.m` to get complete point clouds (not incomplete!).
- Train the autoencoder using `main.py` and save the model.
  - Link the data paths (train, test); see `#TODO`.
  - Open a visdom server on port 8102:
    `python -m visdom.server -port 8102`
- Generate GFVs with the pretrained AE using `GFVgen.py` and store the data.
  - Link the pretrained model and training data path; see `#TODO`.
- Train the GAN on the generated GFV data by going into the GAN folder (`trainer.py`) and save the model.
- Train the RL agent with the pre-trained GAN and AE by running `trainRL.py`.
  - First, process the data with `Processdata.m` to get incomplete point clouds.
  - Link the data paths (incomplete training dataset); see `#TODO` in `RL_params.py`.
- Test with incomplete data by running `testRL.py`.
  - Link the pretrained RL network paths.
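The `Processdata.m` step above produces incomplete point clouds from complete 2048-point ShapeNet clouds. A minimal Python sketch of that idea is shown below; the function name `make_incomplete`, the 25% drop fraction, and the random-anchor removal strategy are illustrative assumptions, not the repo's actual MATLAB logic:

```python
import numpy as np

def make_incomplete(points: np.ndarray, drop_frac: float = 0.25, seed: int = 0) -> np.ndarray:
    """Remove the drop_frac of points nearest to a randomly chosen anchor point,
    simulating a missing region in the cloud (illustrative only)."""
    rng = np.random.default_rng(seed)
    anchor = points[rng.integers(len(points))]          # pick a random anchor point
    dist = np.linalg.norm(points - anchor, axis=1)      # distance of every point to the anchor
    keep = np.argsort(dist)[int(drop_frac * len(points)):]  # drop the nearest fraction
    return points[keep]

cloud = np.random.default_rng(1).standard_normal((2048, 3))  # stand-in for one ShapeNet cloud
partial = make_incomplete(cloud)
print(partial.shape)  # (1536, 3)
```

The resulting partial clouds are what `trainRL.py` and `testRL.py` consume.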
Credits: