Description
Dear Authors,
Thanks for keeping your source code implementation public. I have a question about the evaluation on the DTU dataset, since I am trying to reproduce the results from your paper.
I used your script data_format_from_neus.py to export the transforms.json file, which includes all the necessary per-image metadata.
However, after exporting the mesh with your implementation and visualizing it against the reference point cloud, the two do not appear to be aligned, possibly due to a transformation issue. I have not tested all models, but this is confirmed for at least 9 of them (dtu_scan122 was the only one I found that works properly and reproduces the paper result). Below is an example image of this situation, along with the code I used to visualize the two models.
import os
import numpy as np
import open3d as o3d

data_mesh = o3d.io.read_triangle_mesh(mesh)
# apply the dataset's scale matrix so the mesh matches the camera scale
vtx = np.asarray(data_mesh.vertices)
params = np.load(os.path.join(args.dataset_dir, f"{some_path}/cameras_sphere.npz"))
scale_mat = params['scale_mat_0']
vtx_n = np.concatenate([vtx, np.ones((vtx.shape[0], 1))], axis=-1)
vtx_n = vtx_n @ scale_mat.T
data_mesh.vertices = o3d.utility.Vector3dVector(vtx_n[:, :3])
point_cloud = o3d.io.read_point_cloud(pc)
o3d.visualization.draw([data_mesh, point_cloud])  # draw the geometry, not the file path `mesh`
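As a side note, one quick sanity check I tried (a minimal NumPy sketch, independent of the DTU data, with a made-up scale matrix standing in for scale_mat_0) is to confirm the homogeneous transform is applied in the right direction: if scale_mat maps normalized unit-sphere coordinates to world coordinates, applying its inverse to the transformed vertices should recover the originals.

```python
import numpy as np

# hypothetical scale matrix: uniform scale s plus translation t,
# mimicking the structure of scale_mat_0 in cameras_sphere.npz
s, t = 2.5, np.array([10.0, -3.0, 7.0])
scale_mat = np.eye(4)
scale_mat[:3, :3] *= s
scale_mat[:3, 3] = t

# points in normalized (unit-sphere) space, standing in for mesh vertices
pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(100, 3))

# forward: normalized -> world, same operation as in the snippet above
pts_h = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=-1)
world = (pts_h @ scale_mat.T)[:, :3]

# inverse: world -> normalized; should recover the original points
world_h = np.concatenate([world, np.ones((world.shape[0], 1))], axis=-1)
back = (world_h @ np.linalg.inv(scale_mat).T)[:, :3]
assert np.allclose(back, pts)
```

This round trip passes on my end, so I believe the matrix multiplication itself is correct and the issue lies elsewhere in the pipeline.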
Is there a chance I am missing something in the transformation process? Can you confirm this issue with your own data? If your script works correctly on your end, could you please provide a working example?
