I got different results on the evaluation set. #12

@BinBCheng

Description

Hello, thanks for your wonderful work. I downloaded the pretrained model and ran train.py to reproduce dynamic-multiframe-depth, but I got results different from yours (resnet18-pretrained).

|       | Abs_rel | Sq_rel | RMSE  | RMSE_log | a1    | a2    | a3    |
|-------|---------|--------|-------|----------|-------|-------|-------|
| Paper | 0.043   | 0.151  | 2.113 | 0.073    | 0.975 | 0.996 | 0.999 |
| Own   | 0.126   | 0.893  | 4.552 | 0.190    | 0.833 | 0.940 | 0.981 |
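For reference, these are the standard monocular-depth evaluation metrics (Eigen-style). A minimal sketch of how they are typically computed, assuming `gt` and `pred` are flat arrays of valid ground-truth and predicted depths in meters (this is the common convention, not necessarily this repo's exact evaluation code):

```python
import numpy as np

def compute_depth_metrics(gt, pred):
    """Standard depth metrics: abs_rel, sq_rel, rmse, rmse_log, a1/a2/a3.

    gt, pred: 1-D arrays of valid ground-truth and predicted depths (> 0).
    """
    # Accuracy under threshold: fraction of pixels where max(gt/pred, pred/gt) < 1.25^k
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = float((thresh < 1.25).mean())
    a2 = float((thresh < 1.25 ** 2).mean())
    a3 = float((thresh < 1.25 ** 3).mean())

    abs_rel = float(np.mean(np.abs(gt - pred) / gt))      # mean absolute relative error
    sq_rel = float(np.mean((gt - pred) ** 2 / gt))        # mean squared relative error
    rmse = float(np.sqrt(np.mean((gt - pred) ** 2)))      # root-mean-square error
    rmse_log = float(np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)))

    return {"abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse,
            "rmse_log": rmse_log, "a1": a1, "a2": a2, "a3": a3}
```

A perfect prediction yields zero errors and a1 = a2 = a3 = 1.0, so the table above shows both error and accuracy metrics degrading in the reproduction.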

My environment: torch==1.10.1+cu113, torchvision==0.11.2+cu113.

All metrics are quite different from those in the paper. I did not change anything in "trian_my_resnet18.json" except replacing "n_gpus=8" with "n_gpus=3".

  1. Why is my result not good?
  2. Besides the settings in "trian_my_resnet18.json", what details do I need to pay attention to in order to reproduce the result?
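One plausible culprit worth checking: with the per-GPU batch size unchanged, going from 8 GPUs to 3 shrinks the effective (global) batch size, and the learning rate tuned for the 8-GPU setup may no longer be appropriate. A common heuristic is the linear scaling rule. A hypothetical sketch (the variable names and the base learning rate below are illustrative, not taken from the repo's config):

```python
# Hypothetical values; the actual learning rate lives in trian_my_resnet18.json.
base_lr = 1e-4          # LR tuned for the paper's 8-GPU run (assumed)
base_n_gpus = 8         # GPUs in the original config
my_n_gpus = 3           # GPUs actually available

# Linear scaling rule: scale LR with the effective batch size,
# which scales with the number of GPUs when per-GPU batch size is fixed.
scaled_lr = base_lr * my_n_gpus / base_n_gpus
```

Whether this fully explains the gap is uncertain; checkpoint epoch, dataset preprocessing, and evaluation crop settings are also worth verifying.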

Looking forward to your reply, thank you.
Best wishes!
