🚀 2026-CVPR-BML

[CVPR 2026] Official PyTorch implementation of
Bootstrapping Multi-view Learning for Test-time Noisy Correspondence



✨ Overview

This repository provides the official code for BML under two practical settings:

  • Multi-view classification on feature-based .mat datasets
  • Multi-modal scene classification on SUN-R-D-T (RGB / Depth / Text)

BML is designed for test-time noisy correspondence, where cross-view/modal alignment can be corrupted at inference time.
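To make the setting concrete, test-time noisy correspondence can be simulated by permuting a fraction of one view's samples so they no longer align with the other views. A minimal NumPy sketch — the function name and corruption scheme here are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

def corrupt_correspondence(view, noise_ratio, seed=0):
    """Return a copy of `view` in which a `noise_ratio` fraction of rows
    is randomly permuted, breaking alignment with the other views."""
    rng = np.random.default_rng(seed)
    n = view.shape[0]
    num_noisy = int(round(noise_ratio * n))
    idx = rng.choice(n, size=num_noisy, replace=False)  # rows to corrupt
    corrupted = view.copy()
    corrupted[idx] = view[rng.permutation(idx)]         # shuffle those rows
    return corrupted

# Example: corrupt 50% of the second view of a toy two-view dataset
X2 = np.arange(20, dtype=np.float32).reshape(10, 2)
X2_noisy = corrupt_correspondence(X2, noise_ratio=0.5)
```

At ratio 0.0 the view is untouched (the vanilla setting in the results table); higher ratios break more of the cross-view alignment.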


📦 Data Preparation

1) Multi-view datasets

Download link: 📚 Google Drive

Place all .mat files under:

datasets/multi-view-datasets/
  100Leaves.mat
  3V_Fashion_MV.mat
  AwAfea.mat
  Caltech-5V.mat
  CCV.mat
  handwritten.mat
  LandUse_21.mat
  NUSWIDEOBJ.mat
  Scene15.mat
  YoutubeFace_sel.mat

Add your own multi-view dataset

To plug in a custom dataset, update load_multiviewdata(...) in multi_view.py.

Minimal template inside load_multiviewdata:

elif args.dataset_name == 'YourDatasetName':
    # assumes `import scipy.io as sio` and `import numpy as np` at module level
    mat = sio.loadmat(args.dataset_path + args.dataset_name + '.mat')
    data_list.append(mat['X1'])   # view 1, shape [N, D1]
    data_list.append(mat['X2'])   # view 2, shape [N, D2]
    # ... add more views if needed
    labels = np.squeeze(mat['Y']).astype(np.int64)  # labels, shape [N]
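Whatever the source format, a custom loader should return views shaped [N, Di] with a shared sample count N and a 1-D int64 label array. A small sanity check along these lines (the helper name and toy data are illustrative, not part of the repository):

```python
import numpy as np

def check_multiview(data_list, labels):
    """Sanity-check a custom loader's output: every view must have the
    same number of samples N, and labels must be a 1-D int64 array of length N."""
    n = data_list[0].shape[0]
    assert all(v.ndim == 2 and v.shape[0] == n for v in data_list), "views must be [N, Di]"
    assert labels.shape == (n,) and labels.dtype == np.int64, "labels must be [N] int64"
    return n, [v.shape[1] for v in data_list]

# Toy two-view example: 100 samples, 32- and 64-dim views, 5 classes
data_list = [np.random.rand(100, 32), np.random.rand(100, 64)]
labels = np.random.randint(0, 5, size=100).astype(np.int64)
n, dims = check_multiview(data_list, labels)
```

Running this once after wiring in a new `elif` branch catches shape and dtype mismatches before training starts.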

2) SUN-R-D-T dataset

Download link: 🤗 Hugging Face

Expected structure:

datasets/SUN-R-D-T/
  train.json
  test.json
  train/
  test/
  cache/

A cache is generated automatically at datasets/SUN-R-D-T/cache/ to speed up on-the-fly augmentation.
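The typical pattern behind such a cache is "load if present, otherwise build once and persist". A generic sketch of that pattern — the function name, pickle format, and cache key are assumptions for illustration, not the repository's implementation:

```python
import os
import pickle
import tempfile

def load_or_build(cache_dir, key, build_fn):
    """Generic disk cache: reuse a pickled artifact if present,
    otherwise build it once with `build_fn` and store it for later runs."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, key + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    obj = build_fn()
    with open(path, "wb") as f:
        pickle.dump(obj, f)
    return obj

# Example: cache a (pretend) preprocessed split under a temp directory
cache = tempfile.mkdtemp()
first = load_or_build(cache, "train_features", lambda: list(range(5)))
second = load_or_build(cache, "train_features", lambda: [])  # hits the cache
```

The second call returns the cached artifact without invoking its builder, which is why deleting the cache directory is the way to force regeneration after changing preprocessing.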

3) Pretrained weights

Name              | Path                                   | Link
ResNet18          | weights/resnet/checkpoints/            | PyTorch Link
BERT-base-uncased | weights/google-bert/bert-base-uncased/ | 🤗 Link

🏃 Quick Start

Multi-view experiments

python multi_view.py --dataset_name Scene15 --seeds 0 1 2 3 4 5 6 7 8 9

SUN-R-D-T experiments (RGB / Depth / Text)

python multi_modal.py --dataset_name SUN-R-D-T --seeds 0 1 2 3 4

📊 Output

The evaluation table reports performance from noise ratio 0.0 to 1.0 (step 0.1) for each seed, then aggregates with MEAN and STD.

  • Column 0.0: vanilla multi-view/modal classification (no test-time noisy correspondence)
  • Columns 0.1 ~ 1.0: robustness under increasing noisy correspondence
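The MEAN/STD aggregation over seeds can be sketched with Python's standard library. Whether the repository reports sample or population standard deviation is not specified; this sketch assumes the sample standard deviation (`statistics.stdev`), and the accuracy numbers are toy values, not real results:

```python
from statistics import mean, stdev

# Per-seed accuracies at each noise ratio (toy numbers for illustration)
per_seed = {
    0.0: [84.1, 83.0, 85.2],
    0.1: [82.5, 83.1, 81.9],
}

# One (MEAN, STD) pair per noise-ratio column, rounded to two decimals
summary = {
    ratio: (round(mean(accs), 2), round(stdev(accs), 2))
    for ratio, accs in per_seed.items()
}
```

Each column of the printed table corresponds to one such (MEAN, STD) pair computed across the seeds passed via `--seeds`.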

Example:

============================================================================================================
                                       Scene15 evaluated complete. ✅                                        
============================================================================================================
  Seed  |  0.0   |  0.1   |  0.2   |  0.3   |  0.4   |  0.5   |  0.6   |  0.7   |  0.8   |  0.9   |  1.0   |
------------------------------------------------------------------------------------------------------------
  ...
------------------------------------------------------------------------------------------------------------
  MEAN  | 83.92  | 82.90  | 81.95  | 80.78  | 79.63  | 79.25  | 77.68  | 76.33  | 75.42  | 74.41  | 74.04  |
  STD   |  1.39  |  1.42  |  1.34  |  1.18  |  1.64  |  1.53  |  1.34  |  1.61  |  1.06  |  1.94  |  1.10  |
============================================================================================================

Logs are saved to: logs/<dataset_name>/<timestamp>/train.log


📝 Citation

If you find BML useful, please consider citing our paper 📝 and starring the repository ⭐️!

@InProceedings{BML,
    author    = {He, Changhao and Xue, Di and Li, Shuxian and Hao, Yanji and Peng, Xi and Hu, Peng},
    title     = {Bootstrapping Multi-view Learning for Test-time Noisy Correspondence},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2026},
}
