- NetVLAD/NetVLAD+LSTM
We develop the testbench based on Nanne/pytorch-NetVlad.
- train NetVLAD/NetVLAD+LSTM
```bash
# NetVLAD
cd autoplace
python train.py --nEpochs=40 --output_dim=9216 --seqLen=1 --encoder_dim=512 --num_clusters=18 --net=netvlad --split=val --logsPath=logs_netvlad --cGPU=0 --imgDir='dataset/7n5s_xy11/img'

# NetVLAD+LSTM
cd autoplace
python train.py --nEpochs=40 --output_dim=4096 --seqLen=3 --encoder_dim=512 --num_clusters=18 --net=netvlad --split=val --logsPath=logs_netvlad --cGPU=0 --imgDir='dataset/7n5s_xy11/img'
```
- evaluate NetVLAD/NetVLAD+LSTM
```bash
cd autoplace
python train.py --mode='evaluate' --cGPU=0 --split=test --resume=[log_folder]
```
- calculate Recall@N and Precision-Recall: modify the path `netvlad` in `autoplace/postprocess/parse/resume_path.json` to [netvlad_log_folder], and run the command:
```bash
cd autoplace/postprocess/parse
python parse.py --model=netvlad
```
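The Recall@N metric that the parsing script reports can be sketched in a few lines of numpy. A toy illustration, assuming query/database descriptor matrices and per-query ground-truth index lists (`recall_at_n` and all names below are illustrative, not from the repo):

```python
import numpy as np

def recall_at_n(q_feats, db_feats, gt, n_values=(1, 5, 10)):
    """Recall@N: fraction of queries whose top-N retrieved database
    indices contain at least one ground-truth match."""
    # pairwise Euclidean distances between queries and database entries
    dists = np.linalg.norm(q_feats[:, None, :] - db_feats[None, :, :], axis=2)
    ranked = np.argsort(dists, axis=1)          # nearest database index first
    recalls = {}
    for n in n_values:
        hits = sum(len(set(ranked[i, :n]) & set(gt[i])) > 0
                   for i in range(len(q_feats)))
        recalls[n] = hits / len(q_feats)
    return recalls

# toy example: 3 queries, 5 database entries, 4-D descriptors
rng = np.random.default_rng(0)
db = rng.normal(size=(5, 4))
q = db[[0, 2, 4]] + 0.01 * rng.normal(size=(3, 4))   # near-copies of db entries
gt = [[0], [2], [4]]                                  # true match indices
print(recall_at_n(q, db, gt))
```

Since each toy query is a near-copy of a database entry, Recall@1 comes out to 1.0 here.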
- SeqNet
We develop the testbench based on oravus/SeqNet. We use the pretrained NetVLAD as the black-box feature encoder to encode images into 9216-D vectors, which are then downscaled to 4096-D using PCA:
- modify the path `netvlad` in `postprocess/parse/resume_path.json` to [netvlad_log_folder] and run the command:
```bash
cd autoplace/postprocess/parse
python parse_seqnet.py --phase='generate_features'
```
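The PCA downscaling step can be sketched with a plain SVD. A minimal numpy illustration (`pca_reduce` is a hypothetical helper; the toy output dimension is 64 to keep it fast, whereas the actual pipeline reduces 9216-D NetVLAD vectors to 4096-D):

```python
import numpy as np

def pca_reduce(feats, out_dim):
    """Project features onto their top `out_dim` principal components."""
    mean = feats.mean(axis=0)
    centered = feats - mean
    # right singular vectors of the centered data = principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:out_dim].T

# toy stand-in: 100 descriptors of 9216-D reduced to 64-D
feats = np.random.default_rng(1).normal(size=(100, 9216))
reduced = pca_reduce(feats, 64)
print(reduced.shape)   # (100, 64)
```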
- use the stored 4096-D vectors to train S3-SeqNet and S1-SeqNet (the two can be trained in parallel):
```bash
# train S3-SeqNet
cd autoplace
python train_seqnet.py --split=val --cGPU=0

# train S1-SeqNet
cd autoplace
python train_seqnet.py --split=val --seqLen=1 --w=1 --cGPU=0
```
- evaluate
```bash
# evaluate S3-SeqNet or S1-SeqNet
cd autoplace
python train_seqnet.py --mode='evaluate' --cGPU=0 --split=test --resume=[S3-SeqNet or S1-SeqNet training log folder]
```
- generate detailed results: modify the paths `s3_seqnet` and `s1_seqnet` in `postprocess/parse_logs/resume_path.json` to their [log_folder] and run the command:
```bash
cd autoplace/postprocess/parse
python parse_seqnet.py --phase='match'
```
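At its core, SeqNet collapses a window of per-frame descriptors into one sequence descriptor with a temporal 1-D convolution; `--seqLen` sets the window and `--w` the kernel width. A toy numpy sketch with uniform weights, only to illustrate the shapes involved (SeqNet learns the convolution weights, and `seq_descriptor` is a hypothetical name, not the repo's implementation):

```python
import numpy as np

def seq_descriptor(frame_descs, w):
    """Collapse a (seqLen, D) stack of per-frame descriptors into one
    D-dim sequence descriptor via a width-`w` temporal convolution
    (uniform kernel here; SeqNet learns these weights)."""
    T, D = frame_descs.shape
    kernel = np.ones(w) / w
    # convolve each descriptor dimension along time, keep valid outputs
    conv = np.stack([np.convolve(frame_descs[:, d], kernel, mode='valid')
                     for d in range(D)], axis=1)
    return conv.mean(axis=0)

rng = np.random.default_rng(2)
seq = rng.normal(size=(3, 8))            # seqLen=3, toy 8-D descriptors
print(seq_descriptor(seq, w=3).shape)    # (8,)
# with seqLen=1 and w=1 this degenerates to the single-frame descriptor
```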
- MinkLoc3D
refer to minkloc3d_AutoPlace to generate `minkloc3d_features.pickle`, then copy `minkloc3d_features.pickle` to `autoplace/postprocess/parse/results` and run the command:
```bash
cd autoplace/postprocess/parse
python parse.py --model='minkloc3d'
```
- ScanContext
We develop the testbench based on irapkaist/scancontext.
```bash
cd autoplace/postprocess/parse
python parse.py --model='scancontext'
```
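For reference, the ScanContext descriptor itself is a ring-by-sector polar grid where each bin keeps the maximum value of the points that fall into it. A minimal numpy sketch on random 2-D points (`scan_context` and its default bin counts are illustrative, not taken from irapkaist/scancontext):

```python
import numpy as np

def scan_context(points, values, num_rings=20, num_sectors=60, max_range=80.0):
    """Build a (num_rings, num_sectors) descriptor: each bin keeps the
    max value (height for lidar, intensity for radar) of its points."""
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0]) % (2 * np.pi)
    ring = np.minimum((r / max_range * num_rings).astype(int), num_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    desc = np.zeros((num_rings, num_sectors))
    np.maximum.at(desc, (ring, sector), values)   # per-bin maximum
    return desc

pts = np.random.default_rng(3).uniform(-80, 80, size=(1000, 2))
vals = np.random.default_rng(4).uniform(size=1000)
print(scan_context(pts, vals).shape)   # (20, 60)
```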
- M2DP
We develop the testbench based on the M2DP Python implementation.
```bash
cd autoplace/postprocess
python parse.py --model='m2dp'
```
- Under The Radar
refer to the utr_AutoPlace repo to generate `utr_features.pickle`, then copy `utr_features.pickle` to `autoplace/postprocess/parse/results` and run the command:
```bash
cd autoplace/postprocess/parse
python parse.py --model='utr'
```
- Kidnapped Radar
- convert Cartesian images to polar images
```bash
cd autoplace/preprocess
python cart_to_polar.py --dataset='../dataset/7n5s_xy11'
```
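The Cartesian-to-polar conversion can be sketched as a nearest-neighbour resampling around the image centre. An illustrative numpy version (`cart_to_polar` below is a hypothetical stand-in for the repo's `preprocess/cart_to_polar.py`, and the output sizes are assumptions):

```python
import numpy as np

def cart_to_polar(img, num_angles=400, num_ranges=None):
    """Resample a square Cartesian BEV image into a (range, azimuth)
    polar image by nearest-neighbour lookup around the image centre."""
    h, w = img.shape
    cx, cy = w / 2.0, h / 2.0
    num_ranges = num_ranges or min(h, w) // 2
    thetas = np.linspace(0, 2 * np.pi, num_angles, endpoint=False)
    radii = np.arange(num_ranges)
    # Cartesian sample coordinates for every (range, azimuth) cell
    xs = (cx + radii[:, None] * np.cos(thetas)[None, :]).round().astype(int)
    ys = (cy + radii[:, None] * np.sin(thetas)[None, :]).round().astype(int)
    xs = np.clip(xs, 0, w - 1)
    ys = np.clip(ys, 0, h - 1)
    return img[ys, xs]

cart = np.random.default_rng(5).uniform(size=(256, 256))
polar = cart_to_polar(cart)
print(polar.shape)   # (128, 400)
```

In the polar image a rotation of the sensor becomes a circular shift along the azimuth axis, which is what makes this representation attractive for Kidnapped Radar.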
- train
```bash
cd autoplace
python train_kid.py --nEpochs=80 --output_dim=32768 --seqLen=1 --encoder_dim=512 --num_clusters=64 --net=kid --logsPath=logs_kid --split=val --cGPU=0 --imgDir='dataset/7n5s_xy11/img_polar'
```
- evaluate
```bash
cd autoplace
python train_kid.py --mode='evaluate' --cGPU=0 --split=test --resume=[training log folder]
```
- generate detailed results
```bash
cd autoplace/postprocess/parse
python parse.py --model='kidnapped'
```
After training and evaluating the competing methods above, the `[method_name]_result.pickle` files should be in the `autoplace/postprocess/parse/results` folder.
To generate the (1) Recall@N curve, (2) PR curve, (3) F1 score, and (4) average precision, run:
```bash
cd autoplace/postprocess/vis
python competing_figure.py
python competing_scores.py
```
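The F1 score and average precision reported by the scoring script can be sketched from a precision-recall curve. A toy numpy illustration (`f1_and_ap` is a hypothetical helper; AP here is the step-wise sum over recall increments, which may differ slightly from the repo's exact computation):

```python
import numpy as np

def f1_and_ap(precision, recall):
    """Max F1 over the PR curve, and average precision as the
    step-wise area under the PR curve."""
    precision, recall = np.asarray(precision), np.asarray(recall)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    # AP = sum over operating points of (recall step) * precision
    order = np.argsort(recall)
    p, r = precision[order], recall[order]
    ap = np.sum(np.diff(np.concatenate(([0.0], r))) * p)
    return f1.max(), ap

# toy PR curve with four operating points
prec = [1.0, 0.8, 0.6, 0.5]
rec = [0.2, 0.5, 0.8, 1.0]
print(f1_and_ap(prec, rec))
```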