
Trying to understand data pre-processing/neighborhood computation #40

@mackenzie-warren

In Section 4 of the paper (https://arxiv.org/abs/1904.02375), the computation of the query point set {q} is described as scoring points so that all regions of the point cloud are sampled uniformly. However, I cannot find the corresponding part of the code. Looking at lines 112-114 of semantic3d_seg.py, for example, the points appear to be sampled purely at random, without any weighting. Could you clarify where this scoring is implemented?
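
For reference, here is how I understood Section 4. This is a minimal sketch of my own (not code from this repository), assuming each point keeps a score counting how many selected neighborhoods it already belongs to, and the next query point is drawn from the least-covered points; the function name `sample_query_points` and the parameters `n_queries` / `k` are just placeholders I made up:

```python
# Sketch of score-based query selection as I read Section 4 of the paper.
# Not the repository's implementation; the scoring rule below is my assumption.
import numpy as np
from scipy.spatial import cKDTree


def sample_query_points(points, n_queries, k):
    """Pick query points so that coverage of the cloud stays roughly uniform.

    Each point keeps a score counting how many selected neighborhoods it
    already belongs to; the next query is drawn among the least-covered points.
    """
    tree = cKDTree(points)
    scores = np.zeros(len(points), dtype=np.int64)
    queries = []
    for _ in range(n_queries):
        # Candidates are the points with the current minimum score (least covered).
        candidates = np.flatnonzero(scores == scores.min())
        q_idx = np.random.choice(candidates)
        queries.append(q_idx)
        # Increase the score of every point in the selected neighborhood.
        _, neighbors = tree.query(points[q_idx], k=k)
        scores[neighbors] += 1
    return np.asarray(queries)


if __name__ == "__main__":
    pts = np.random.rand(10000, 3).astype(np.float32)
    q = sample_query_points(pts, n_queries=64, k=256)
    print(q.shape)  # (64,)
```

If something like this is what the paper means, then the purely random choice at lines 112-114 of semantic3d_seg.py seems to skip the scoring step entirely, which is what I'd like to understand.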
