Thanks for your great work on the reproduction of the QLIB benchmark, but I have a couple of questions about the results.
First, about the metrics: the values in the table are much higher than those in the original QLIB benchmark (https://github.com/microsoft/qlib/tree/main/examples/benchmarks). For example, the IC of the Transformer model on the Alpha360 dataset is 0.08938, compared to 0.0114 in the original benchmark.
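For reference, here is a minimal sketch of how IC is typically computed in Qlib-style benchmarks: the daily cross-sectional Pearson correlation between model scores and realized next-period returns, averaged over trading days. The function name, the `(datetime, instrument)` index layout, and the column names are my assumptions, not taken from this repo.

```python
import pandas as pd

def information_coefficient(pred: pd.Series, label: pd.Series) -> float:
    """Mean daily cross-sectional Pearson IC.

    pred / label: hypothetical Series indexed by (datetime, instrument),
    holding model scores and realized next-period returns respectively.
    """
    df = pd.concat({"score": pred, "label": label}, axis=1).dropna()
    # Correlate scores with returns within each trading day,
    # then average the daily correlations over the test period.
    daily_ic = df.groupby(level="datetime").apply(
        lambda day: day["score"].corr(day["label"])
    )
    return daily_ic.mean()
```

If the labels or the averaging differ from this (e.g., Rank IC via Spearman, or a different label horizon), that alone could explain a large gap versus the original benchmark numbers.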
Second, about the fairness of the training process: simple LSTMs and Transformers outperform more complex state-of-the-art models such as Informer and PatchTST. Could you provide the details and setup of the data preprocessing (e.g., whether something like the normalization sketched below was applied) and training?
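To make the question concrete, this is the kind of preprocessing detail I mean: a per-day cross-sectional z-score of the features, which is a common step in Qlib-style pipelines. This is purely an illustrative sketch under my own assumptions about the index layout, not a claim about what this repo actually does.

```python
import pandas as pd

def cross_sectional_zscore(features: pd.DataFrame) -> pd.DataFrame:
    """Standardize each feature across instruments within each day.

    features: hypothetical DataFrame indexed by (datetime, instrument),
    one column per feature.
    """
    def _z(day: pd.DataFrame) -> pd.DataFrame:
        # Small epsilon guards against zero-variance features on a day.
        return (day - day.mean()) / (day.std() + 1e-12)

    return features.groupby(level="datetime", group_keys=False).apply(_z)
```

Whether such normalization, the lookback window, and the train/valid/test split were identical across all models would help clarify the fairness question.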
Looking forward to your reply.