```python
prior = torch.ones(args.num_class)/args.num_class  # uniform prior over classes
prior = prior.cuda()
pred_mean = torch.softmax(logits, dim=1).mean(0)   # mean predicted distribution over the batch
penalty = torch.sum(prior*torch.log(prior/pred_mean))  # KL(prior || pred_mean)
```
Since entropy is defined as `-sum(p*log(p))`, shouldn't the summand be weighted by the predicted distribution rather than the prior, i.e. why not `penalty = torch.sum(pred_mean*torch.log(prior/pred_mean))`?
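For comparison, here is a minimal, self-contained sketch of both expressions; `num_class` and the random `logits` are hypothetical stand-ins for `args.num_class` and the model output. The snippet's version is the KL divergence KL(prior || pred_mean), which is non-negative; the suggested variant weights by `pred_mean` and equals the negative of KL(pred_mean || prior), so it is non-positive. Both vanish exactly when `pred_mean` matches the uniform prior.

```python
import torch

torch.manual_seed(0)
num_class = 10  # hypothetical stand-in for args.num_class
logits = torch.randn(32, num_class)  # hypothetical batch of model outputs

prior = torch.ones(num_class) / num_class          # uniform prior over classes
pred_mean = torch.softmax(logits, dim=1).mean(0)   # mean predicted distribution

# Penalty as written in the snippet: KL(prior || pred_mean) >= 0
penalty_repo = torch.sum(prior * torch.log(prior / pred_mean))

# The variant asked about: sum weighted by pred_mean,
# which equals -KL(pred_mean || prior) <= 0
penalty_alt = torch.sum(pred_mean * torch.log(prior / pred_mean))

print(penalty_repo.item(), penalty_alt.item())
```

Minimizing either direction of the KL divergence pushes `pred_mean` toward the uniform prior; they differ in which distribution weights the log-ratio, not in their minimizer.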