Description
Hi guys,
First of all, many thanks for the great work. The results are outstanding and should, in theory, transfer well to indoor building scans. In practice, however, inference is far too slow to be really useful. The bottleneck is the get_sample method of the shape encoder, which recomputes the norm of the entire point cloud when generating each individual batch.
Line 277 in 828b7a6:

```python
dist = np.linalg.norm(pts, axis=1)
```
I have to work with several hundred point clouds, each consisting of over 10 million points. The time spent on batch generation in the DataLoader therefore becomes prohibitive.
Perhaps I could perform the normalization lazily, without materializing the entire distance vector. But I ultimately need the full distance vector for the stochastic sampling anyway.
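Since the point clouds themselves don't change between batches, one workaround I'm considering is to compute the distance vector once per cloud and cache it, so get_sample no longer pays the full norm cost on every call. A minimal sketch (the `get_dists` helper and the cache are my own, hypothetical, not part of the repository's code):

```python
import numpy as np

# Hypothetical per-cloud cache: the distance vector is computed once
# and reused for every subsequent batch drawn from the same cloud.
_dist_cache = {}

def get_dists(cloud_id, pts):
    """Return the per-point distances to the origin for one point cloud,
    computing them only on the first call for that cloud_id."""
    if cloud_id not in _dist_cache:
        _dist_cache[cloud_id] = np.linalg.norm(pts, axis=1)
    return _dist_cache[cloud_id]
```

This keeps one float per point in memory (about 80 MB per 10M-point cloud at float64), which may itself become an issue with hundreds of clouds, so it would need an eviction policy or memory-mapped storage in practice.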
Obviously, my problem does not directly concern you or your work. But perhaps you have gained some experience over time with applying the model to larger point clouds without a significant loss of performance.
I would be very grateful for any tips and tricks you are willing to share.
Thanks,
Lukas