Hi,
First of all, thanks for the great work and for sharing this project.
I'm fine-tuning AdaPoinTr (trained on the PCN dataset) on my own custom dataset. Before opening this issue, I carefully read through most of the existing ones but couldn't find a satisfying answer to my concern.
Here’s what I’m observing:
- During fine-tuning, the scale of the loss varies: for example, the dense loss starts around 28 in one case and closer to 4 in another (with the loss multiplied by 1000). Is such variation expected or natural? (See the sketch after this list.)
- I also noticed a gap between the dense training loss and the dense test loss; surprisingly, the test loss is lower than the train loss. I saw a few issues discussing similar results, but I'm still not convinced of the explanation for why this happens.
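To make the loss-scale question concrete, here is a toy sketch in plain PyTorch (not the repo's actual loss code) showing how both the Chamfer variant (CD-L1 vs. squared CD-L2) and the normalization of the point clouds change the reported magnitude:

```python
import torch

def chamfer_l1(x, y):
    # Symmetric Chamfer distance with plain Euclidean point-pair
    # distances (CD-L1 style), averaged over points in each direction.
    d = torch.cdist(x, y)                      # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def chamfer_l2(x, y):
    # Same matching, but squared distances (CD-L2 style); this shrinks
    # much faster than CD-L1 as the clouds are normalized to a smaller scale.
    d = torch.cdist(x, y) ** 2
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

torch.manual_seed(0)
gt = torch.rand(2048, 3)
pred = gt + 0.02 * torch.randn_like(gt)        # mildly noisy "prediction"

for scale in (1.0, 0.5):                       # same shapes, different normalization
    l1 = 1000 * chamfer_l1(scale * pred, scale * gt)
    l2 = 1000 * chamfer_l2(scale * pred, scale * gt)
    print(f"scale={scale}: CD-L1*1000 = {l1:.2f}, CD-L2*1000 = {l2:.4f}")
```

With squared distances, the value scales with the square of the normalization factor, so starting values of roughly 28 vs. 4 could come purely from a different CD variant or a differently normalized dataset rather than from anything being wrong.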
Could you please clarify:
- Is this behavior normal or a red flag?
- If this is expected, what’s the reasoning behind it?
- How can I reduce the dense test error further? Would adding weighting terms to the loss help? (See the sketch after this list.)
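Regarding the weighting idea, this is the kind of thing I have in mind. The helper and weight names here are hypothetical, not AdaPoinTr's actual config keys; it just combines the coarse and dense Chamfer terms with tunable weights:

```python
import torch

def chamfer_l1(x, y):
    # Same CD-L1 style helper as in the sketch above.
    d = torch.cdist(x, y)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def weighted_completion_loss(sparse_pred, dense_pred, gt,
                             w_sparse=1.0, w_dense=1.0):
    # Hypothetical weighting of the coarse (sparse) and fine (dense)
    # Chamfer terms; raising w_dense biases optimization toward the
    # dense output. These names are illustrative only.
    return (w_sparse * chamfer_l1(sparse_pred, gt)
            + w_dense * chamfer_l1(dense_pred, gt))

# Example: weight the dense term twice as heavily as the sparse one.
torch.manual_seed(0)
gt = torch.rand(2048, 3)
loss = weighted_completion_loss(torch.rand(512, 3), torch.rand(2048, 3), gt,
                                w_sparse=1.0, w_dense=2.0)
print(loss.item())
```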
As a sanity check, I also fine-tuned AdaPoinTr on the original PCN dataset, and I observed similar results there too.
Any insights or suggestions would be greatly appreciated!
Here are the results for my custom dataset:
And these are the results of fine-tuning on the PCN dataset:
