Welcome to the final submission for the MNLP project! For this last submission, as you can read in the project description, you have 4 main goals:
- Finish training the four models detailed in the project description (DPO, MCQA, Quantized-MCQA, RAG-MCQA), optimizing their performance as best you can, and submit them (individual work, one model per member).
- Submit the code you used to train your models, including the training script for each model.
- Submit the training data used for each model.
- Write a final report (group work).
Note: For this final submission, the models will be evaluated based on their performance.
The repo has 6 folders, 4 of which you will use to submit the four deliverables:
- `_templates` contains the LaTeX template for your final report. You MUST use this template.
- `_test` contains scripts that run automated tests to validate that your submission is correctly formatted.
- `model_configs` should be populated by you with the 4 model config YAML files: `dpo_model.yaml`, `mcqa_model.yaml`, `quantized_model.yaml`, and `rag_model.yaml`. Make sure you fill in the important information in each config file; this information is exactly what is used for evaluating your models (an illustrative config sketch is shown after this folder list). You need to change `<HF_USERNAME_team_member_X>` to your Huggingface Hub username. Make sure that you have submitted your models to the correct Huggingface Hub repositories, adhering to the following naming convention:
  - `<HF_USERNAME_team_member_DPO>/MNLP_M3_dpo_model`
  - `<HF_USERNAME_team_member_MCQA>/MNLP_M3_mcqa_model`
  - `<HF_USERNAME_team_member_QUANTIZED>/MNLP_M3_quantized_model`
  - `<HF_USERNAME_team_member_RAG>/MNLP_M3_rag_model`

  For the team member responsible for the RAG model, make sure you have submitted the two additional deliverables specific to RAG:
  - `<HF_USERNAME_team_member_RAG>/<RAG_DOCUMENT_REPO_NAME>`, where you replace `<RAG_DOCUMENT_REPO_NAME>` with the actual Huggingface Hub repo name of your submitted RAG documents.
  - `<HF_USERNAME_team_member_RAG>/MNLP_M3_document_encoder`
- `pdf` should be filled by you with your final report PDF (titled `<YOUR-GROUP-NAME>.pdf`). This directory should then contain only one PDF.
- `data` contains `data_repo.json`. In this file, you need to change `<HF_USERNAME_team_member_X>` to your Huggingface Hub username. Make sure that you have submitted the training data for your 4 models to the correct Huggingface Hub repositories, adhering to the following naming convention:
  - `<HF_USERNAME_team_member_DPO>/MNLP_M3_dpo_dataset`
  - `<HF_USERNAME_team_member_MCQA>/MNLP_M3_mcqa_dataset`
  - `<HF_USERNAME_team_member_QUANTIZED>/MNLP_M3_quantized_dataset`
  - `<HF_USERNAME_team_member_RAG>/MNLP_M3_rag_dataset`
- `code` must contain the following:
  - Four Bash Training Scripts: You must provide four executable Bash scripts (`.sh` files) in the root of the `code` directory. These scripts are essential for reproducing your results and obtaining models equivalent to those you submit (a sketch of one such script is shown after this list):
    - `train_dpo.sh`
    - `train_mcqa.sh`
    - `train_quantized.sh`
    - `train_rag.sh`
  - Four Corresponding Subfolders for Training Code: For each of the four models, you should create a dedicated subfolder within `code/` to house its specific training code. These folders should have the following structure:
    - `train_dpo/`
    - `train_mcqa/`
    - `train_quantized/`
    - `train_rag/`
By running these scripts we should be able to reproduce your training process and obtain models that are functionally equivalent to the ones you have developed and submitted.
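For instance, `train_dpo.sh` could follow the rough sketch below. This is only an illustration under the assumption that your entry point is a Python script inside `code/train_dpo/`; the script name, flags, and paths are placeholders for your actual training code:

```bash
#!/usr/bin/env bash
# train_dpo.sh -- illustrative sketch only; entry-point name, flags, and paths
# are placeholders for your actual training code in code/train_dpo/.
set -euo pipefail

# Install the dependencies your training code needs (if you ship a requirements file).
pip install -r train_dpo/requirements.txt

# Launch training; the dataset repo follows the naming convention above.
python train_dpo/train.py \
    --train_data "<HF_USERNAME_team_member_DPO>/MNLP_M3_dpo_dataset" \
    --output_dir checkpoints/dpo_model
```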
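Likewise, for the config files in `model_configs/`, the sketch below only illustrates filling in the username placeholder. It is not the official schema: keep exactly the fields defined by the provided template files and substitute your own values.

```yaml
# model_configs/dpo_model.yaml -- illustrative sketch, not the official schema.
# Keep the fields from the provided template; only replace the placeholder
#   <HF_USERNAME_team_member_DPO>/MNLP_M3_dpo_model
model: jane-doe/MNLP_M3_dpo_model  # "jane-doe" stands in for your HF username
```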
The autograding tests run automatically with every commit to the repo. If you want to trigger them manually, follow the instructions from the previous milestone.
For the final submission, as in M2, we provide you with an evaluation suite to benchmark each of the four models. As we covered in the compute tutorial session, details about how we evaluate your models are listed in these slides: Evaluation Implementation Slides.
We provide you with a demo MCQA evaluation dataset and a demo DPO evaluation dataset on the Huggingface Hub.
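To check the expected data format before shaping your own training data, you can load the demo sets with the `datasets` library. A minimal sketch; the repo id and split name below are placeholders, so substitute the actual ids of the demo datasets:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual demo dataset id.
demo_mcqa = load_dataset("<demo_mcqa_dataset_repo_id>")
print(demo_mcqa)             # available splits and their sizes
print(demo_mcqa["test"][0])  # one example, to check the fields (split name may differ)
```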
Also, for the RAG part, here is a demo Huggingface repo for the RAG documents and a collection of pretrained Huggingface embedding models you can start with.
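As a starting point for the document encoder, you can sanity-check any of those pretrained embedding models on a few documents. A minimal sketch with `sentence-transformers`; the model name below is just one common lightweight choice, not a course requirement:

```python
from sentence_transformers import SentenceTransformer

# Any pretrained Huggingface embedding model can serve as a starting point;
# "all-MiniLM-L6-v2" is merely a common, lightweight example.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Embed a few RAG documents and a query, then rank documents by similarity.
docs = ["A passage about quantization.", "A passage about decoding strategies."]
doc_emb = encoder.encode(docs, normalize_embeddings=True)
query_emb = encoder.encode("How does quantization work?", normalize_embeddings=True)
scores = doc_emb @ query_emb  # cosine similarity, since embeddings are normalized
print(scores)
```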
Recall that you have to submit your model weights, RAG documents, and training data via the Huggingface Hub. Make sure you:
- Have a Huggingface account.
- Make all your submissions public on the Huggingface Hub.
Please take a look at the documentation on how to upload your dataset (the RAG documents are also uploaded as a dataset) and how to upload your model weights. Note that you also have to push each model's tokenizer to the same model repository.
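As a minimal sketch of the upload step for the DPO member (the local checkpoint and data paths are placeholders; the same pattern applies to the other three models and datasets):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder local paths: point these at your actual trained checkpoint and data.
model = AutoModelForCausalLM.from_pretrained("checkpoints/dpo_model")
tokenizer = AutoTokenizer.from_pretrained("checkpoints/dpo_model")

# Push weights AND tokenizer to the same model repository,
# then verify on the Hub that the repo is public.
model.push_to_hub("<HF_USERNAME_team_member_DPO>/MNLP_M3_dpo_model")
tokenizer.push_to_hub("<HF_USERNAME_team_member_DPO>/MNLP_M3_dpo_model")

# Push the training data to its own dataset repository.
dataset = load_dataset("json", data_files="data/dpo_train.jsonl")
dataset.push_to_hub("<HF_USERNAME_team_member_DPO>/MNLP_M3_dpo_dataset")
```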
After you push your model weights and RAG documents to the correct Huggingface Hub repositories, make sure to test your models with the official evaluation suite in a fresh and clean environment (not the same environment you used for development).
If you get an error in the clean environment, you are responsible for debugging and fixing it.
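A quick sketch of such a check, assuming a conda-based setup; the install and run commands below are placeholders, so follow the evaluation suite's own instructions for the exact steps:

```bash
# Create and activate a fresh environment, separate from your development one.
conda create -n mnlp_eval python=3.10 -y
conda activate mnlp_eval

# Install only what the evaluation suite requires (placeholder path).
pip install -r requirements.txt

# Run the official evaluation against your submitted Hub repositories
# (placeholder command; use the suite's documented entry point).
python evaluate.py --config model_configs/dpo_model.yaml
```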