Hi, thanks for the fantastic work!
However, when I train an image-to-depth neural network on synthetic images, it performs very poorly on real-world images.
I found that the synthetic images (shown below) differ significantly from the real ones; could this be the reason the depth prediction fails? When I test the network on synthetic images it works fine, but on real-world images it predicts no depth at all (a completely black output image).
As the examples show, the contact patch in the synthetic images looks much more three-dimensional than in the real ones.


I am new to image processing, so I may be missing some key pre-processing step that would make the real-world images look more like the synthetic ones.
Is my DIGIT sensor poorly fabricated?
Or is some pre-treatment needed to bring the real-world images closer to the synthetic ones? For example, I am imagining something like the sketch below.
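To make my question concrete, here is a minimal sketch of the kind of pre-treatment I have in mind. The reference-frame subtraction and histogram-matching steps are my own guesses, not something taken from this repository, and the function and variable names are hypothetical:

```python
# Sketch of a possible pre-treatment for real DIGIT frames (assumptions, not
# an official pipeline): subtract a no-contact reference frame, denoise, and
# match colour statistics to the synthetic domain.
import cv2
import numpy as np
from skimage.exposure import match_histograms

def preprocess_real_frame(frame_bgr, background_bgr, synthetic_ref_bgr):
    """Try to make a real tactile frame resemble the synthetic renders.

    frame_bgr:         real image with contact (H x W x 3, uint8)
    background_bgr:    real image with no contact, from the same sensor
    synthetic_ref_bgr: one representative synthetic image
    """
    # 1. Remove sensor-specific illumination by subtracting a no-contact
    #    reference frame (a common step for gel-based tactile sensors).
    diff = cv2.absdiff(frame_bgr, background_bgr)

    # 2. Suppress camera noise that the renderer does not produce.
    diff = cv2.GaussianBlur(diff, (5, 5), 0)

    # 3. Match the colour statistics of the synthetic domain so the
    #    network sees an input distribution closer to what it was trained on.
    matched = match_histograms(diff, synthetic_ref_bgr, channel_axis=-1)
    return np.clip(matched, 0, 255).astype(np.uint8)
```

Would something along these lines be the right direction, or is a different calibration step normally used?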
Any advice from the community would be extremely helpful.