Question about evaluation on KIT-ML #16

@fyyakaxyy

Description

Hello, thank you for such a great project. I have a question about "Table 2: APE and AVE benchmark on the KIT dataset":
The evaluation results for Lin et al. (2018), Language2Pose, Ghosh et al. (2021), and TEMOS are exactly the same as those reported in the TEMOS paper. However, the comparative experiment in that paper reports the average over 3 runs.
How can this be compared with other papers? Is it fair?
In addition, the project contains no pre-trained model and no training or evaluation code for KIT-ML. How can I reproduce your results on KIT-ML?
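For context, the averaging protocol I mean is the one TEMOS uses: run evaluation with 3 different seeds and report the per-metric mean. A minimal sketch (the metric names and values below are hypothetical, just to illustrate the aggregation):

```python
import statistics

def aggregate_runs(runs):
    """Collapse per-run metric dicts into (mean, sample std) per metric."""
    return {
        m: (statistics.mean(r[m] for r in runs),
            statistics.stdev(r[m] for r in runs))
        for m in runs[0]
    }

# Hypothetical APE/AVE root-joint values from 3 evaluation runs.
runs = [
    {"APE_root": 0.963, "AVE_root": 0.445},
    {"APE_root": 0.971, "AVE_root": 0.451},
    {"APE_root": 0.958, "AVE_root": 0.442},
]
summary = aggregate_runs(runs)
# summary["APE_root"] -> (mean over 3 runs, sample std over 3 runs)
```

A single-seed number is not directly comparable to such a mean, which is the crux of my question.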
