
About smoothness_loss #6

@kkennethwu

Hi Author, thanks for the great work!

While running Stage 1 on my self-captured dataset, I occasionally hit a case where the shapes of flow and neighbor_flows are inconsistent, which causes the smoothness_loss computation to fail.

    def get_local_smoothness_loss(self, pcd, flow, index=None, neighbor_K=10, loss_type="l2"):
        if index is None:
            pairwise_dist = knn_points(pcd.unsqueeze(0), pcd.unsqueeze(0), K=neighbor_K, return_sorted=False)
            index = pairwise_dist.idx
        # TODO: flow & neighbor_flows shapes may be different
        if pcd.shape[0] != index.shape[1]:
            print("######### Error #########")
            print(pcd.shape)
            print(index.shape)
            raise ValueError(f"pcd.shape[0] != index.shape[1], {pcd.shape[0]} != {index.shape[1]}")

        neighbor_flows = knn_gather(flow.unsqueeze(0), index)
        neighbor_flows = neighbor_flows[:, :, 1:, :]  # drop the first neighbor, which is the point itself
        if loss_type == "l1":
            loss = torch.mean(torch.abs(flow.unsqueeze(0).unsqueeze(2) - neighbor_flows))
        else:
            loss = torch.mean(torch.square(flow.unsqueeze(0).unsqueeze(2) - neighbor_flows))
        return {"loss": loss, "index": index}

Do you have any idea what might be causing this? Or is it something minor that I can safely catch and skip?

Here’s the error message:

Traceback (most recent call last):
  File "train_PointTrackGS.py", line 867, in <module>
    train_table_completion_NIT(args)
  File "train_PointTrackGS.py", line 174, in train_table_completion_NIT
    loss = trainer.train_exhautive_one_step(iteration,data)
  File "/project3/kkennethwu/mannequin/MoDGS/model/neuralsceneflowprior.py", line 449, in train_exhautive_one_step
    dic= self.get_local_smoothness_loss(pcd,flow.squeeze(0),index,self.args.neighbor_K)
  File "/project3/kkennethwu/mannequin/MoDGS/model/neuralsceneflowprior.py", line 400, in get_local_smoothness_loss
    loss = torch.mean(torch.square(flow.unsqueeze(0).unsqueeze(2)-neighbor_flows))   
RuntimeError: The size of tensor a (139784) must match the size of tensor b (139740) at non-singleton dimension 1
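
For reference, the workaround I am currently testing is to recompute the KNN index whenever the cached index no longer matches the current point count, instead of raising. This is only a sketch based on my guess that the mismatch comes from an index cached for a previous (differently sized) point cloud; refresh_index_if_stale is a hypothetical helper of mine, not code from this repo:

    from pytorch3d.ops import knn_points

    def refresh_index_if_stale(pcd, index, neighbor_K=10):
        """Recompute the KNN index if the cached one was built for a different point count."""
        if index is None or pcd.shape[0] != index.shape[1]:
            pairwise_dist = knn_points(pcd.unsqueeze(0), pcd.unsqueeze(0),
                                       K=neighbor_K, return_sorted=False)
            index = pairwise_dist.idx  # shape: (1, N, K)
        return index

    # Before calling get_local_smoothness_loss (my assumption about the call site):
    # index = refresh_index_if_stale(pcd, index, self.args.neighbor_K)
    # dic = self.get_local_smoothness_loss(pcd, flow.squeeze(0), index, self.args.neighbor_K)

Please let me know if recomputing the index like this would silently hide a real bug elsewhere.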

BTW, in the preprocessing step, the static mask for my self-captured dataset ends up being all False, even when I set the threshold to a very large value, so I ended up using all the depth values for the alignment.
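
To make that concrete, my change to the depth-alignment step is roughly the following: I fit a per-frame scale and shift by least squares over all valid pixels instead of only the static-mask pixels. The function and variable names here are mine, just to illustrate the idea:

    import torch

    def align_depth_all_pixels(mono_depth, target_depth, valid=None):
        """Least-squares scale/shift alignment of monocular depth to the target depth,
        using every valid pixel (the static mask came out all False for me)."""
        if valid is None:
            valid = target_depth > 0
        x = mono_depth[valid].reshape(-1)
        y = target_depth[valid].reshape(-1)
        # Solve min_{s,t} || s * x + t - y ||^2 in closed form.
        A = torch.stack([x, torch.ones_like(x)], dim=1)          # (M, 2)
        sol = torch.linalg.lstsq(A, y.unsqueeze(1)).solution     # (2, 1)
        scale, shift = sol[0, 0], sol[1, 0]
        return scale * mono_depth + shift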

Could this be caused by my modifications?
