Hypothetical Tail Embeddings
- Ensure miniconda3 is installed.
- Source conda with
source $YOUR_PATH/miniconda3/bin/activate
- If you are creating the environment for the first time, navigate to the KGE-LLM root directory and run
conda env create -f environment.yml
If the environment already exists, skip straight to activating it.
- Run
conda activate KGE
- To be able to run Llama models, create a local .env file in the root directory of KGE-LLM.
- In the .env file, set the variable
HF_TOKEN = $YOUR_HF_TOKEN
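How the repository actually consumes the token is not shown here; many projects use python-dotenv for this. As a minimal stdlib sketch, the (hypothetical) helper below parses `KEY = VALUE` lines from the .env file into the process environment, where the Hugging Face libraries can pick up `HF_TOKEN`:

```python
import os

def load_dotenv_minimal(path=".env"):
    """Parse KEY=VALUE lines from a .env file into os.environ (minimal sketch)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Do not override variables already set in the shell.
            os.environ.setdefault(key.strip(), value.strip())

# Load HF_TOKEN (and anything else in .env) if the file exists.
if os.path.exists(".env"):
    load_dotenv_minimal()
```

Spaces around `=` (as in the line above) are stripped, so both `HF_TOKEN=...` and `HF_TOKEN = ...` work with this sketch.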
- To run experiments, run
python -m experiments.openai_experiment
- Ensure the conda environment is active.
- Ensure you have an OpenAI API key entered in the code.
- Ensure you are in the root directory of the repository.
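The three checks above can be automated before launching an experiment. This is a hypothetical pre-flight helper, not part of the repository; it assumes the key is exposed as `OPENAI_API_KEY`, whereas the note above says it is entered in the code:

```python
import os

def check_experiment_setup(root_marker="environment.yml"):
    """Collect setup problems before launching an experiment (hypothetical helper)."""
    problems = []
    # Experiments are launched with `python -m ...`, so the current
    # directory must be the repository root (environment.yml lives there).
    if not os.path.exists(root_marker):
        problems.append("run from the KGE-LLM root directory")
    # `conda activate KGE` exports CONDA_DEFAULT_ENV=KGE.
    if os.environ.get("CONDA_DEFAULT_ENV") != "KGE":
        problems.append("activate the KGE conda environment")
    # Assumption: the key is read from the environment rather than
    # hard-coded; adapt this check to however the code stores it.
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("enter your OpenAI API key")
    return problems
```

Calling `check_experiment_setup()` and printing the returned list before `python -m experiments.openai_experiment` turns silent misconfiguration into an actionable message.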
- To run finetuning, run
python -m finetune.finetuned-minilm