About Depth to Disparity #29
Description
Hello author,
I have some questions regarding the depth-to-disparity conversion in the depth_splat.py script. The standard formula for converting depth to disparity is typically given as:

disparity = (focal_length × baseline) / depth

However, in the implementation, the camera's intrinsic parameters (such as focal length and baseline) do not appear to be explicitly defined:
```python
disp_map = torch.from_numpy(batch_depth).unsqueeze(1).float().cuda()
disp_map = disp_map * 2.0 - 1.0
disp_map = disp_map * max_disp
```
Could you clarify why these parameters are not explicitly included in the code? Is it because they are implicitly handled elsewhere, or are there assumptions about the camera setup that allow for this simplification?
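For comparison, here is a minimal sketch of the conventional conversion described above. The focal length and baseline values are hypothetical placeholders for illustration, not values taken from the repository:

```python
import numpy as np

def depth_to_disparity(depth, fx, baseline, eps=1e-6):
    """Standard conversion: disparity = fx * baseline / depth.

    fx is the focal length in pixels, baseline the stereo baseline in
    meters; eps guards against division by zero at invalid depths.
    """
    depth = np.asarray(depth, dtype=np.float64)
    return (fx * baseline) / np.maximum(depth, eps)

# Hypothetical camera parameters (illustration only).
fx = 700.0        # focal length in pixels
baseline = 0.54   # baseline in meters

depth = np.array([[10.0, 20.0],
                  [5.0, 40.0]])  # depth map in meters
disp = depth_to_disparity(depth, fx, baseline)
```

In contrast, the snippet above seems to treat `batch_depth` as an already-normalized map in [0, 1] and simply rescales it into [-max_disp, max_disp], which is why no intrinsics appear.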
Additionally, I noticed that the weight map in the depth splatting function is computed as:

```python
weights_map = 1.414 ** weights_map
```

This operation does not scale the weight map by a constant factor; it exponentiates it element-wise with base √2 ≈ 1.414.
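To illustrate the effect, a small numeric sketch of my own (not from the repository) showing what element-wise exponentiation with base 1.414 does to a few weight values:

```python
# Element-wise transform: w -> 1.414 ** w.
weights = [0.0, 1.0, 2.0, 3.0]
transformed = [1.414 ** w for w in weights]
# A weight of 0 maps to 1.0, and each +1 in the weight multiplies the
# output by ~1.414, so the growth is exponential rather than a fixed
# linear rescaling of the map.
```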