The depth-to-color extrinsics stored by the RealSense are not perfect. Although both depth imagers sit on a single backplane, the color imager is mounted on a different one, and we likely did not calibrate the two to each other carefully enough. One option is to re-align depth to color ourselves using a feature detector such as SURF (or similar) plus RANSAC: run it on a couple dozen images from each sequence, generate corrected extrinsics, and use those throughout.
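A rough sketch of the RANSAC fitting step, under heavy assumptions: feature detection and matching (e.g. SURF or ORB via OpenCV) is assumed to have already produced corresponding point pairs between the color and depth/IR images, and for illustration this fits a 2D rigid transform in pure NumPy, whereas the real extrinsics correction would be a 3D transform over deprojected depth points. All function names here are hypothetical.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    # Least-squares rigid transform (rotation + translation) via Kabsch.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def ransac_rigid_2d(src, dst, iters=500, thresh=2.0, seed=0):
    # RANSAC over minimal 2-point samples; keep the model with most inliers.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 2, replace=False)
        R, t = fit_rigid_2d(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best model for the final estimate.
    R, t = fit_rigid_2d(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers
```

Running this per sequence on a few dozen frames and averaging the resulting transforms would give one corrected extrinsic per sequence, which matches the plan above.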