Details of quality drift evaluation #12

@y0urOy


Excellent work.

I would like to ask whether the Self-Forcing model reported in Table 1 of the paper was evaluated using the officially released checkpoint.

In my experiments, I used the official Self-Forcing weights:
https://huggingface.co/gdhe17/Self-Forcing/blob/main/checkpoints/self_forcing_dmd.pt

I evaluated the model on the same 200 MovieGen prompts, generating 30-second videos.
The results I obtained are:

Quality Drift: 4.76 (different from the 1.66 reported in the paper)
First 5s Imaging Quality: 70.06
Last 5s Imaging Quality: 65.31
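
For reference, here is a minimal sketch of the computation I am assuming: Quality Drift = (mean Imaging Quality over each video's first 5s) minus (mean over its last 5s). The `score_clip` callable and `fps=16` are placeholders for whatever Imaging Quality scorer (e.g. the VBench dimension) and frame rate Table 1 actually uses:

```python
from typing import Callable, List, Sequence

def quality_drift(
    videos: List[Sequence],                    # one frame sequence per prompt (200 here)
    score_clip: Callable[[Sequence], float],   # placeholder per-clip Imaging Quality scorer
    fps: int = 16,                             # assumed frame rate of the generated videos
    seconds: int = 5,                          # window length (first/last 5 seconds)
) -> float:
    """Mean first-5s Imaging Quality minus mean last-5s Imaging Quality."""
    n = fps * seconds
    first = [score_clip(v[:n]) for v in videos]
    last = [score_clip(v[-n:]) for v in videos]
    # In my run the two means are 70.06 and 65.31, giving the 4.76 drift
    # (the printed means are rounded).
    return sum(first) / len(first) - sum(last) / len(last)
```

If Table 1 computes the drift differently (per-video drift averaged, a different window length, or a different scorer), that could explain the gap.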

Could you please share more details about how Quality Drift is computed and evaluated in your Table 1 experiments?
