
Fine-tuning AdaPoinTr on PCN vs. custom dataset #181

@m-kafiyan

Description


Hi,
First of all, thanks for the great work and for sharing this project.

I'm fine-tuning AdaPoinTr (trained on the PCN dataset) on my own custom dataset. Before opening this issue, I carefully read through most of the existing ones but couldn't find a satisfying answer to my concern.

Here’s what I’m observing:

  • During fine-tuning, the scale of the loss differs between runs. For example, the dense loss starts around 28 in one run but closer to 4 in another (with the loss multiplied by 1000). Is such variation expected or natural?
  • Also, I noticed a gap between the dense training loss and the dense test loss. Surprisingly, the test loss is lower than the train loss. I saw a few issues discussing similar results, but I’m still not convinced why this would happen.
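One thing worth noting about the loss scale: the Chamfer-style losses used for completion are not scale-invariant, so if the custom dataset is normalized to a different extent than PCN, the starting loss value will differ even for comparable reconstruction quality. A minimal NumPy sketch (my own illustration, not the repo's loss code) shows the loss growing linearly with the point-cloud scale:

```python
import numpy as np

def chamfer_l1(a, b):
    # Pairwise Euclidean distances between every point in a and b: shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Symmetric Chamfer-L1: mean nearest-neighbour distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
pred = rng.standard_normal((256, 3))
gt = pred + 0.01 * rng.standard_normal((256, 3))  # near-perfect prediction

base = chamfer_l1(pred, gt)
# Same shapes, 5x larger spatial extent: the loss also grows roughly 5x.
scaled = chamfer_l1(5.0 * pred, 5.0 * gt)
```

So before comparing loss magnitudes across datasets, it may be worth checking that both are normalized the same way (e.g. to a unit bounding box or sphere).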

Could you please clarify:

  1. Is this behavior normal or a red flag?
  2. If this is expected, what’s the reasoning behind it?
  3. How can I reduce the dense test error further? Would adding weighting terms to the loss help?
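On question 3, the weighting I have in mind would simply rebalance the sparse (coarse) and dense terms of the total loss. This is a hypothetical sketch, not the repo's actual training code; `sparse_loss` and `dense_loss` stand in for the Chamfer terms the training loop already computes, and the weight values are illustrative knobs:

```python
def total_loss(sparse_loss, dense_loss, w_sparse=1.0, w_dense=1.0):
    # Weighted sum of the coarse (sparse) and fine (dense) completion terms.
    return w_sparse * sparse_loss + w_dense * dense_loss

# e.g. emphasise the dense prediction during fine-tuning:
loss = total_loss(sparse_loss=0.8, dense_loss=2.5, w_dense=2.0)
```

Would up-weighting the dense term like this be a reasonable way to push the dense test error down, or does it risk degrading the coarse prediction?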

As a sanity check, I also fine-tuned AdaPoinTr on the original PCN dataset, and I observed similar results there too.

Any insights or suggestions would be greatly appreciated!

Here are the results for my custom dataset:

[Image: fine-tuning results on the custom dataset]

And these are the results of fine-tuning on the PCN dataset:

[Image: fine-tuning results on the PCN dataset]
