DylanY661/5524_project
Setup Instructions

First install the required dependencies using the following command:

pip install -r requirements.txt

Make sure the Python version used is >= 3.10.
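The version requirement can be verified programmatically before anything else runs. The helper below is a minimal sketch and not part of the repository:

```python
import sys

def python_is_supported(min_version=(3, 10)):
    """Return True if the running interpreter is at least min_version."""
    return sys.version_info[:2] >= min_version

print("Python OK:", python_is_supported())
```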

It is also recommended to run the script on an OSC cluster.

Required Files

The embedding and checkpoint files are too large for GitHub and are hosted on Google Drive. Download both archives and extract them into the repository root before running the script.

Embeddings

Download: embeddings.zip

After downloading, extract the contents:

unzip embeddings.zip

The following files should be present after extraction:

| Dataset | File |
| --- | --- |
| ImageNet-1K | imagenet_clip_embeddings.pt |
| ImageNet-1K | imagenet_dino_embeddings.pt |
| ImageNet-R | imagenet_r_clip_embeddings.pt |
| ImageNet-R | imagenet_r_dino_embeddings.pt |
| CIFAR-100 | cifar100_clip_embeddings.pt |
| CIFAR-100 | cifar100_dino_embeddings.pt |
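A partial extraction is easy to miss, so a quick sanity check that all six files are present can save a failed run. This is a sketch, not part of the repository; the file names come from the table above, and it assumes the archive extracts into an `embeddings` directory (matching the script's default `--embeddings_dir`):

```python
from pathlib import Path

# File names listed in the README table above.
EXPECTED_EMBEDDINGS = [
    "imagenet_clip_embeddings.pt",
    "imagenet_dino_embeddings.pt",
    "imagenet_r_clip_embeddings.pt",
    "imagenet_r_dino_embeddings.pt",
    "cifar100_clip_embeddings.pt",
    "cifar100_dino_embeddings.pt",
]

def missing_embeddings(embeddings_dir="embeddings"):
    """Return the expected embedding files not found in embeddings_dir."""
    root = Path(embeddings_dir)
    return [name for name in EXPECTED_EMBEDDINGS if not (root / name).is_file()]

print("Missing:", missing_embeddings())
```

An empty list means the extraction is complete.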

Checkpoints

Download: checkpoints.zip

After downloading, extract the contents:

unzip checkpoints.zip

The following files should be present after extraction:

| Model | Checkpoint Path |
| --- | --- |
| Model 1 (MLP Fusion) | checkpoints/mlp/best_model.pt |
| Model 1 (MLP Fusion) | checkpoints/mlp/latest_checkpoint.pt |
| Model 2 (Joint Fusion) | checkpoints/joint/best_model.pt |
| Model 2 (Joint Fusion) | checkpoints/joint/latest_checkpoint.pt |
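The paths in the table follow a `checkpoints/{mlp|joint}/{best_model|latest_checkpoint}.pt` pattern relative to `--checkpoint_dir` (default `.`). The sketch below shows how such a path could be resolved; the mapping from the `--model` flags to subdirectories is an assumption inferred from the table, not code from the repository:

```python
import os

# Assumed mapping: Model 1 (MLP Fusion) -> mlp, Model 2 (Joint Fusion) -> joint.
CHECKPOINT_SUBDIRS = {"advanced_1": "mlp", "advanced_2": "joint"}

def checkpoint_path(model, checkpoint_dir=".", which="best_model"):
    """Build the expected checkpoint path for a given --model choice."""
    return os.path.join(checkpoint_dir, "checkpoints", CHECKPOINT_SUBDIRS[model], f"{which}.pt")

print(checkpoint_path("advanced_1"))  # ./checkpoints/mlp/best_model.pt on POSIX
```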

Running Instructions

Option 1: OSC (Preferred)

If running on OSC, simply run the command:

sbatch ./main.sbatch

This will automatically submit a batch job to OSC to run the evaluation script.

The output will be directed to a file named results_{jobid}.txt, where jobid is the batch job ID assigned by OSC.

Option 2: Direct Execution

The script can also be run directly inside the terminal.

Make sure Python >= 3.10 is being used, then run either:

python main.py

or

python3 main.py

depending on how the Python interpreter is named on your system.

Results are printed to the terminal.

The expected format and results are given in epxected_results.txt; the output of the evaluation script should match.

Additional Arguments

| Argument | Short | Type | Default | Description |
| --- | --- | --- | --- | --- |
| --model | -m | string | all | Model to evaluate. Choices: advanced_1, advanced_2, all |
| --embeddings_dir | -e | string | embeddings | Directory containing precomputed embeddings |
| --checkpoint_dir | -c | string | . | Base directory for model checkpoints |
| --dataset | -d | string | all | Dataset to evaluate on. Choices: imagenet, imagenet-r, cifar100, all |
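The table above corresponds to a standard argparse setup. The sketch below shows one way these options could be declared; the actual parser lives in main.py and may differ in details:

```python
import argparse

def build_parser():
    """Declare the CLI options listed in the arguments table above."""
    parser = argparse.ArgumentParser(
        description="Evaluate fusion models on precomputed embeddings.")
    parser.add_argument("--model", "-m", default="all",
                        choices=["advanced_1", "advanced_2", "all"],
                        help="Model to evaluate")
    parser.add_argument("--embeddings_dir", "-e", default="embeddings",
                        help="Directory containing precomputed embeddings")
    parser.add_argument("--checkpoint_dir", "-c", default=".",
                        help="Base directory for model checkpoints")
    parser.add_argument("--dataset", "-d", default="all",
                        choices=["imagenet", "imagenet-r", "cifar100", "all"],
                        help="Dataset to evaluate on")
    return parser

args = build_parser().parse_args([])  # empty argv -> all defaults
print(args.model, args.dataset)
```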

Example Usage

Evaluate Model 1 (MLP Fusion) on all datasets:

python main.py --model advanced_1

Evaluate Model 2 (Joint Fusion) on ImageNet-1K only:

python main.py --model advanced_2 --dataset imagenet

Evaluate all models with custom directories:

python main.py --model all --embeddings_dir /path/to/embeddings --checkpoint_dir /path/to/checkpoints
