Multimodal Deep Learning architectures that are robust to noisy and adversarial data.
This project contains code that was adapted from the following repositories:
- Geometric Multimodal Contrastive Representation Learning
- Adversarial Attacks with PyTorch
- Multimodal Variational Autoencoder
To set up the conda environment for the project, run the following commands in the main directory:

```bash
conda env create -f environment.yml
conda activate RGMC
```

To set up the project, run the Python script that downloads and prepares all datasets:
```bash
python download_datasets.py
```

There are two different ways you can train and/or test models.
To begin a new experiment from the command line, choose the architecture, dataset, and stage for the experiment:

```bash
python main.py exp --a <architecture> --d <dataset> --s <train_model||train_classifier||test_model||test_classifier>
```

This starts an experiment with the default hyper-parameters for the given architecture, dataset, and stage, but you can also override individual hyper-parameters through command-line arguments (e.g., learning rate, batch size, number of epochs), as sketched below. For the full list of hyper-parameters you can tune, see.
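For instance, a hypothetical invocation that overrides a few hyper-parameters could look like the following; the flag names `--learning_rate`, `--batch_size`, and `--epochs` are assumptions for illustration, so check the argument parser in `main.py` for the exact names and defaults:

```bash
# Hypothetical example: train a DAE on the MHD dataset with custom hyper-parameters.
# The hyper-parameter flag names below are assumptions; verify them in main.py.
python main.py exp --a dae --d mhd --s train_model \
    --learning_rate 0.001 --batch_size 64 --epochs 100
```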
You can also run several experiments in succession by reading a JSON file with a list of experimental configurations:
```bash
python main.py config --load_config <json_filepath>
```

If instead you want to run multiple experiments covering all possible hyper-parameter permutations, load the configurations JSON file with the `--config_permute <json_filepath>` option.
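As a rough sketch, the configuration file could be a JSON list of experiment entries; the field names and file path below are assumptions for illustration, so match them to whatever `main.py` actually parses:

```bash
# Hypothetical configuration file (field names and path are assumptions;
# align them with the keys main.py expects when reading the JSON list):
cat > configs/example_experiments.json <<'EOF'
[
  {"architecture": "dae", "dataset": "mhd", "stage": "train_model"},
  {"architecture": "dae", "dataset": "mhd", "stage": "train_classifier"}
]
EOF

# Run every experiment listed in the file, one after another:
python main.py config --load_config configs/example_experiments.json
```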
For example, to compare metrics for a DAE-based classifier on the MHD dataset under different standard deviations of Gaussian noise on the image modality, run the following command:
```bash
python main.py compare -a dae -d mhd -s test_classifier --pc noise_std --pp target_modality
```