
Evaluated perplexity not equal to generated perplexity #3

@tony9664

Description

When I generate new SP sequences, a perplexity value is provided for each sequence in the CSV file. However, when I re-calculate the perplexity of those sequences with the run_perplexity.py script, the re-calculated values are generally higher than the ones reported at generation time. Is this expected behavior? Since the paper uses perplexity as an indicator of SP efficiency, which value should I trust?
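For reference, here is a minimal sketch of how per-sequence perplexity is typically computed with a Hugging Face causal LM. This is not the repo's run_perplexity.py, and the checkpoint name is a placeholder; details like whether BOS/EOS tokens are scored, whether the sampling prompt is included in the loss, or how the per-token losses are averaged are exactly the kinds of differences that can make a re-scored perplexity drift from the value logged during generation.

```python
# Minimal perplexity sketch, assuming a Hugging Face causal LM.
# The checkpoint name below is a placeholder, not the repo's model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # placeholder; substitute the checkpoint the repo uses
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(sequence: str) -> float:
    # add_special_tokens controls whether BOS/EOS enter the loss, one
    # common source of mismatch between generation-time and re-scored PPL.
    ids = tokenizer(sequence, return_tensors="pt",
                    add_special_tokens=True).input_ids
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy over all predicted tokens (shifted internally).
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("MKKTAIAIAVALAGFATVAQA"))  # example SP-like sequence
```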
