Hi, and thank you for releasing the code for MoDGS — much appreciated!
I have a question about the metric-scale evaluation described in the paper and code. Specifically, I would like to better understand how the predicted depth maps are scaled to match the scale of the camera poses stored in the poses_bounds.npy file. Could you please clarify:
- How is the scaling factor determined? (My current guess is sketched below.)
- How are the points selected when computing correspondences between the rendered and real images?
- Would it be possible for you to release the specific set of points used for scaling the DYNeRF dataset?
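For context, here is what I currently assume the scaling step looks like: a single global factor fit between the monocular depth predictions and sparse reference depths (e.g. from COLMAP points or the near/far bounds in poses_bounds.npy) at sampled pixels. This is only a minimal sketch to make my question concrete; `estimate_scale`, `pred_depth`, and `ref_depth` are hypothetical names, not identifiers from your code.

```python
import numpy as np

def estimate_scale(pred_depth, ref_depth, mask=None, robust=True):
    """Fit a single scale s so that s * pred_depth ~= ref_depth.

    pred_depth, ref_depth: (N,) depths at the same sampled pixel locations.
    mask: optional (N,) boolean array selecting valid correspondences.
    """
    if mask is not None:
        pred_depth, ref_depth = pred_depth[mask], ref_depth[mask]
    if robust:
        # Median of per-point ratios: robust to outlier correspondences.
        return np.median(ref_depth / pred_depth)
    # Closed-form least squares: argmin_s ||s * pred_depth - ref_depth||^2
    return np.dot(pred_depth, ref_depth) / np.dot(pred_depth, pred_depth)

# scaled_depth = estimate_scale(pred_depth, ref_depth) * pred_depth
```

Is this roughly what the code does, or is the factor estimated differently (e.g. per-frame, or jointly with a shift term)?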
Thanks in advance for your time and help!