Adaptation for large point clouds #1

@RauchLukas

Description

Hi guys,

First of all, many thanks for the great work. The results are outstanding and should in principle transfer very well to indoor building scans. In practice, however, inference is far too slow to be usable. The bottleneck is `get_sample` in the shape encoder, which recomputes the norm of the entire point cloud when generating every single batch:

```python
dist = np.linalg.norm(pts, axis=1)
```

I have to work with several hundred point clouds, each consisting of over 10 million points, so the batch-generation time in the DataLoader becomes prohibitive.

Perhaps I could perform the normalization lazily, without computing the full distance vector up front. But in the end I still need the distance vector for the stochastic sampling, at the latest.
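One workaround I am considering is to precompute the distance vector once per point cloud and reuse it across all batches, instead of recomputing it inside `get_sample`. A minimal sketch of that idea, assuming the repo's actual sampling scheme differs (the class name `CachedSampler`, the inverse-distance weighting, and the `n_samples` parameter are all illustrative, not from the codebase):

```python
import numpy as np

class CachedSampler:
    """Compute the O(N) distance vector once per cloud, then draw
    stochastic batches from the cached result."""

    def __init__(self, pts, rng=None):
        self.pts = pts
        # One-off pass over the full cloud; amortized over all batches.
        self.dist = np.linalg.norm(pts, axis=1)
        # Placeholder weighting scheme (inverse distance); the real
        # sampling distribution in the shape encoder may differ.
        w = 1.0 / (self.dist + 1e-8)
        self.prob = w / w.sum()
        self.rng = rng or np.random.default_rng()

    def sample(self, n_samples):
        # Stochastic subsampling using the cached distances.
        idx = self.rng.choice(len(self.pts), size=n_samples,
                              replace=False, p=self.prob)
        return self.pts[idx]
```

This trades memory (one float per point, per cloud) for time, which may or may not be acceptable with several hundred 10M-point clouds, but it avoids repeating the norm computation for every batch.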

Obviously my problem does not directly concern you and your work, but perhaps you have gained experience over time with transferring the model to larger point clouds without a significant loss of performance.

I am very thankful for every tip and trick you might want to share.

Thanks,
Lukas
