Hi,
Thank you for sharing this excellent work on pretraining retrieval models!
I noticed that this repository includes an example of fine-tuning the model for BEIR:
https://github.com/ma787639046/bowdpr/blob/main/examples/finetune/beir/README.md
If I understand correctly, the BEIR results reported in the paper do not involve fine-tuning on the MS MARCO dataset. Is that correct? If so, are the results for the fine-tuned model available anywhere?
I look forward to your reply!