PyTorch implementation for the paper "Disentangling Multi-view Representations via Curriculum Learning with Learnable Prior" (IJCAI 2025)
- python == 3.10.15
- torch == 2.1.0
- torchvision == 0.16.0
- scikit-learn == 1.5.2
- scipy == 1.14.1
We also export our conda virtual environment as CL2P.yaml. You can use the following command to create the environment.
conda env create -f CL2P.yaml

You can find the Office-31 dataset we used in the paper on Baidu Netdisk, and the pre-trained models on Baidu Netdisk.
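After activating the environment, you may want to confirm that the installed package versions match the ones listed above. The snippet below is a minimal sketch (not part of the repository) that compares installed versions against the README's pinned versions using only the standard library:

```python
from importlib import metadata

# Pinned versions from the README; adjust if you intentionally use newer releases.
expected = {
    "torch": "2.1.0",
    "torchvision": "0.16.0",
    "scikit-learn": "1.5.2",
    "scipy": "1.14.1",
}

for pkg, want in expected.items():
    try:
        have = metadata.version(pkg)
        status = "OK" if have == want else f"mismatch (found {have})"
    except metadata.PackageNotFoundError:
        status = "not installed"
    print(f"{pkg}: expected {want} -> {status}")
```

Exact version matches are not strictly required, but the pinned versions are the ones the results were produced with.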
To train the model, use the following command:
python train.py -f configs/Edge-MNIST.yaml

This will start the training process using the configuration specified in configs/Edge-MNIST.yaml.
To test the trained model, use the following command:
python test.py -f configs/Edge-MNIST.yaml

This will load the trained model and test it using the configuration specified in configs/Edge-MNIST.yaml.
If you find CL2P useful in your research, please consider citing:
@inproceedings{guo2025cl2p,
title={Disentangling Multi-view Representations via Curriculum Learning with Learnable Prior},
author={Guo, Kai and Wang, Jiedong and Peng, Xi and Hu, Peng and Wang, Hao},
booktitle={Proceedings of the 34th International Joint Conference on Artificial Intelligence},
year={2025}
}

Our code is based on MRDD. Thanks to the authors for releasing their code!
