First of all, thank you for releasing your impactful work!
I'm trying to train NRNeRF on multi-view data from 8 synchronized cameras with known intrinsics and extrinsics, and I ran into a couple of questions regarding the bounds and the downsampling factor.
1. Are the parameters `min_bound` and `max_bound` defined as the minimum and maximum across all cameras?
I noticed that in the README.md, a single `min_bound` and `max_bound` is shared between all cameras when specifying `calibration.json`, rather than there being one pair per camera.
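For context, here is how I am currently deriving the single shared pair of bounds, under the assumption that they are meant to be the scene-wide minimum/maximum over all cameras' depth ranges (the camera names and values below are made up for illustration):

```python
# Hypothetical per-camera depth ranges (near, far) in scene units.
per_camera_depth_ranges = {
    "cam0": (0.5, 4.0),
    "cam1": (0.8, 3.5),
    "cam2": (0.6, 4.2),
}

# My assumption: one shared pair, taken as the min of all near
# bounds and the max of all far bounds across cameras.
min_bound = min(near for near, _ in per_camera_depth_ranges.values())
max_bound = max(far for _, far in per_camera_depth_ranges.values())

print(min_bound, max_bound)  # 0.5 4.2
```

Is this the intended convention, or should the bounds be computed differently?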
2. When using `load_llff_data_multi_view`, if our training images are downsampled from their original resolution by a certain factor, do any parts of `calibration.json` (i.e. the camera intrinsics / extrinsics) need to be adjusted to account for the downsampling factor?
I'm asking because downsampling images by a `factor` is not implemented in `load_llff_data_multi_view`, whereas `load_llff_data` does use `factor` in a couple of places (https://github.com/yenchenlin/nerf-pytorch/blob/a15fd7cb363e93f933012fd1f1ad5395302f63a4/load_llff.py#L76, https://github.com/yenchenlin/nerf-pytorch/blob/a15fd7cb363e93f933012fd1f1ad5395302f63a4/load_llff.py#L103).
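In case it helps clarify the question, my current understanding (an assumption on my part, not something I found in the repo) is that only the intrinsics change: the focal lengths and principal point scale by 1/factor, while the extrinsics (camera poses) stay untouched. Roughly:

```python
import numpy as np

# Hypothetical full-resolution pinhole intrinsics:
# fx, fy on the diagonal, principal point (cx, cy) in the last column.
K = np.array([
    [1000.0,    0.0, 960.0],
    [   0.0, 1000.0, 540.0],
    [   0.0,    0.0,   1.0],
])

factor = 4  # images downsampled by this factor

# Focal lengths and principal point scale by 1/factor;
# the extrinsics (world-to-camera poses) are unchanged.
K_down = K.copy()
K_down[:2] /= factor

print(K_down)
```

Is this the adjustment you would expect us to apply to `calibration.json`, or does the loader handle it internally?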
Thank you in advance for reading this long question.
I look forward to reading your response.