
About the results in model_zoo #2

@Tingji2419

Description


Thanks for your great work on the reproduction of the QLIB benchmark. However, I have a few questions about the results.

First, about the metrics: the values in the table are much higher than those in the original QLIB benchmark (https://github.com/microsoft/qlib/tree/main/examples/benchmarks). For example, the IC of the Transformer model on the Alpha360 dataset is 0.08938, which is far greater than the 0.0114 reported in the original QLIB benchmark.
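For context on the comparison, here is a minimal sketch of how IC is conventionally computed in Qlib-style evaluations: the cross-sectional Pearson correlation between predictions and labels on each day, averaged over days. The `daily_ic` helper and the toy data are illustrative assumptions, not the repository's actual evaluation code.

```python
import pandas as pd

def daily_ic(pred: pd.Series, label: pd.Series) -> float:
    """Mean daily cross-sectional Pearson correlation (IC).

    pred/label are indexed by (datetime, instrument), mirroring the
    shape of Qlib prediction output. Illustrative sketch only.
    """
    df = pd.DataFrame({"pred": pred, "label": label}).dropna()
    ic_by_day = df.groupby(level="datetime").apply(
        lambda g: g["pred"].corr(g["label"])
    )
    return ic_by_day.mean()

# Toy example: two days, three instruments each.
idx = pd.MultiIndex.from_product(
    [pd.to_datetime(["2020-01-01", "2020-01-02"]), ["A", "B", "C"]],
    names=["datetime", "instrument"],
)
pred = pd.Series([0.1, 0.2, 0.3, 0.3, 0.2, 0.1], index=idx)
label = pd.Series([0.1, 0.2, 0.3, 0.1, 0.2, 0.3], index=idx)
print(daily_ic(pred, label))  # day 1 corr = 1.0, day 2 corr = -1.0 -> mean 0.0
```

An IC of ~0.09 versus ~0.01 is a large gap under this definition, which is why the label construction and evaluation window matter for the comparison.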

Second, about the fairness of the training process: simple LSTMs and Transformers outperform more complex state-of-the-art models such as Informer and PatchTST. Could you provide the details and setup of the data preprocessing and training?

Looking forward to the reply.
