Dear author,
In #7 you said that setting PST=3.0 improves performance on CUB-VGG, and you provided the CUB_VGG16BN_O3 weights and a raw branch. I looked through the PsyNet-raw-vgg-16 branch and found that its post-processing differs from the main branch's. In the raw branch it is:
```python
attmap1 = attmap[-1]
attmap2 = attmap[-2]
attmap = norm_att_map(attmap1)
a = torch.mean(attmap, dim=(1, 2), keepdim=True)
attmap = (attmap > a).float()
attmap2 = norm_att_map(attmap2)
a2 = torch.mean(attmap2, dim=(1, 2), keepdim=True)
attmap2 = (attmap2 > a2).float()  # threshold filtering at the per-map mean
attmap = F.interpolate(attmap.unsqueeze(dim=1), (attmap2.size(1), attmap2.size(2)), mode='nearest').squeeze()
attmap = attmap2 * attmap
```
However, in the main branch the post-processing is:
```python
attmap = attmap[-1]
attmap = norm_att_map(attmap)
```
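For clarity, here is a minimal NumPy sketch of the difference between the two routines, assuming `norm_att_map` is a per-map min-max normalization (the repo's actual implementation may differ) and that the shallower map's size is an integer multiple of the deeper map's, so nearest-neighbour upsampling can be written as a repeat:

```python
import numpy as np

def norm_att_map(attmap):
    # Hypothetical min-max normalization to [0, 1] per map.
    amin = attmap.min(axis=(1, 2), keepdims=True)
    amax = attmap.max(axis=(1, 2), keepdims=True)
    return (attmap - amin) / (amax - amin + 1e-8)

def raw_branch_postprocess(attmap1, attmap2):
    # Raw branch: binarize both maps at their mean, upsample the deeper
    # map to the shallower map's size (nearest), take the intersection.
    a1 = norm_att_map(attmap1)
    m1 = (a1 > a1.mean(axis=(1, 2), keepdims=True)).astype(float)
    a2 = norm_att_map(attmap2)
    m2 = (a2 > a2.mean(axis=(1, 2), keepdims=True)).astype(float)
    # Nearest-neighbour upsampling (stand-in for F.interpolate(..., mode='nearest')),
    # valid only when sizes divide evenly.
    scale = m2.shape[1] // m1.shape[1]
    m1_up = m1.repeat(scale, axis=1).repeat(scale, axis=2)
    return m2 * m1_up

def main_branch_postprocess(attmap1):
    # Main branch: just normalize the last map, no thresholding.
    return norm_att_map(attmap1)
```

The raw branch therefore yields a hard binary mask (the intersection of two thresholded maps), while the main branch yields a continuous map, which is more than a hyper-parameter change.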
I have two questions:
(1) Should I run CUB_VGG16BN_O3 with the PsyNet-raw-vgg-16 branch? I tried, but the accuracy is only 74.99%, which is much lower than what you reported in #7.
(2) Why do the two post-processing routines differ, given that only the hyper-parameter PST was changed?
Looking forward to your reply; I would also appreciate a response on #8 if convenient.