Reproduction of precision and recall metrics #91

@see08-ai

Description

Hello, we are using adm_in256_stats.npz to reproduce the precision and recall metrics of MAR-L, and we get a precision of 0.51 and a recall of 0.60. The recall matches the paper, but the precision is well below the reported 0.81. Could you describe how the precision and recall values reported in the paper were computed?

Here is our evaluation code, with `prc` set to `True`:

```python
import torch_fidelity

metrics_dict = torch_fidelity.calculate_metrics(
    input1=save_folder,
    input2=input2,
    fid_statistics_file=fid_statistics_file,
    cuda=True,
    isc=True,
    fid=True,
    kid=False,
    prc=True,
    verbose=False,
)
```
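
For context, my understanding is that torch-fidelity's `prc` metric follows the improved precision/recall of Kynkäänniemi et al. (2019), which approximates each distribution's manifold with k-nearest-neighbour hyperspheres over extracted image features. Below is a minimal sketch of that computation, assuming hypothetical `real_feats` and `fake_feats` arrays of shape [N, D]; it is only meant to illustrate what `prc=True` measures, not the evaluation code used for the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_radii(feats, k=3):
    # Distance from each point to its k-th nearest neighbour
    # (column 0 after sorting is the zero self-distance).
    d = cdist(feats, feats)
    return np.sort(d, axis=1)[:, k]

def manifold_coverage(query, support, radii):
    # Fraction of query points that fall inside at least one
    # hypersphere centred on a support point.
    d = cdist(query, support)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

def precision_recall(real_feats, fake_feats, k=3):
    # Precision: fraction of generated samples inside the real manifold.
    precision = manifold_coverage(fake_feats, real_feats, knn_radii(real_feats, k))
    # Recall: fraction of real samples inside the generated manifold.
    recall = manifold_coverage(real_feats, fake_feats, knn_radii(fake_feats, k))
    return precision, recall
```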
