question about penalty #47

@hhhhnwl

Description

import torch  # args.num_class and logits come from the surrounding training code

# uniform prior over the classes
prior = torch.ones(args.num_class) / args.num_class
prior = prior.cuda()
# mean predicted class distribution over the batch
pred_mean = torch.softmax(logits, dim=1).mean(0)
penalty = torch.sum(prior * torch.log(prior / pred_mean))  # KL(prior || pred_mean)

Since entropy has the form p*log(p), why isn't the penalty computed as penalty = torch.sum(pred_mean * torch.log(prior / pred_mean)), i.e., weighting the log-ratio by pred_mean rather than by prior?
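For concreteness, here is a small, self-contained sketch (with made-up logits and class count, purely illustrative) that evaluates both expressions side by side: the repo's prior-weighted form and the pred_mean-weighted alternative asked about above.

import torch

num_class = 4
prior = torch.ones(num_class) / num_class           # uniform prior over classes

# made-up logits for a batch of 3 samples (illustrative only)
logits = torch.tensor([[2.0, 0.1, 0.1, 0.1],
                       [1.5, 0.2, 0.1, 0.1],
                       [2.5, 0.0, 0.1, 0.2]])
pred_mean = torch.softmax(logits, dim=1).mean(0)     # mean prediction over the batch

# penalty as written in the repo: KL(prior || pred_mean)
penalty_prior_weighted = torch.sum(prior * torch.log(prior / pred_mean))

# alternative from the question: same log-ratio, weighted by pred_mean instead
penalty_pred_weighted = torch.sum(pred_mean * torch.log(prior / pred_mean))

print(penalty_prior_weighted.item(), penalty_pred_weighted.item())

The two values differ: the prior-weighted sum is the KL divergence KL(prior || pred_mean), which blows up whenever any class probability in pred_mean approaches zero, whereas the pred_mean-weighted sum equals the negative of KL(pred_mean || prior).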
