Hi!
The paper says the model was "trained and tested PInet on the DBD5 and older DBD3 benchmarks using leave-one-out cross-validation". However, the DBD dataset at https://www.dropbox.com/sh/qqi9op061mfxbmo/AADibYuDdMF4n2bDS3uqiEVha?dl=0 is split into folders "tt0"–"tt4", each containing a "shuffled_train_file_list_l.json" and a "shuffled_test_file_list_l.json", which suggests the training and testing sets have already been partitioned. To reproduce the results shown in the paper, which evaluation strategy should be used: leave-one-out cross-validation, or the train/test partitioning provided in the dataset? For reference, the sketch below shows how I am currently interpreting the provided split files.
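This is only a minimal sketch of my current reading of the Dropbox folder, not code from the PInet repo; the local path `data/dbd` and the assumption that "tt0"–"tt4" are five folds with per-fold train/test ID lists are mine.

```python
# Sketch: enumerate the provided splits, assuming tt0-tt4 are folds and the
# JSON files hold lists of complex IDs. Path "data/dbd" is hypothetical.
import json
from pathlib import Path

DATA_ROOT = Path("data/dbd")  # local copy of the downloaded Dropbox folder

for fold_dir in sorted(DATA_ROOT.glob("tt[0-4]")):
    with open(fold_dir / "shuffled_train_file_list_l.json") as f:
        train_ids = json.load(f)
    with open(fold_dir / "shuffled_test_file_list_l.json") as f:
        test_ids = json.load(f)
    print(f"{fold_dir.name}: {len(train_ids)} train / {len(test_ids)} test entries")
```

If this reading is correct, should the paper's numbers be obtained by averaging over these five folds, or by running leave-one-out cross-validation over the full complex list instead?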
Thank you so much!