CLIMB-ReID: A Hybrid CLIP-Mamba Framework for Person Re-Identification(AAAI2025)


Framework

Recently, several works have proposed using large-scale pre-trained vision-language models such as CLIP to boost ReID performance. Unfortunately, existing methods still struggle to address two key issues simultaneously: efficiently transferring the knowledge learned by CLIP, and comprehensively extracting context information from images or videos. To address these issues, we introduce CLIMB-ReID, a pioneering hybrid framework that combines the impressive power of CLIP with the remarkable computational efficiency of Mamba.

📢News

  • [2025/07/08] I've improved the relevant repository. Happy graduation!

🔥 Highlight

  • We propose a novel framework named CLIMB-ReID for person ReID. To the best of our knowledge, this is the first application of Mamba to person ReID. We propose a Multi-Memory Collaboration strategy that efficiently transfers the knowledge learned by CLIP to person ReID without text or prompt learning.

  • We propose a Multi-Temporal Mamba to capture multi-granular spatiotemporal information in videos. Extensive experiments demonstrate that our CLIMB-ReID outperforms existing methods on three video-based and two image-based person ReID datasets.

📝 Results

  • Performance

  • Pretrained Models: coming soon.

  • t-SNE Visualization

📑Installation

  • Install the conda environment
conda create -n CLIMB python=3.8
conda activate CLIMB
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch
  • Install the required packages:
pip install -r requirements.txt

For “selective_scan”:

git clone https://github.com/MzeroMiko/VMamba.git
cd VMamba
pip install -r requirements.txt
cd kernels/selective_scan && pip install .
  • Prepare Datasets
Download the datasets (MARS, LS-VID, iLIDS-VID, Market1501 and MSMT17), then unzip them into your_dataset_dir.
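The trainer expects each dataset under its own folder inside your_dataset_dir. A minimal sketch of that layout (the per-dataset folder names here are an assumption; check the dataset classes in this repo against your unzipped folders):

```shell
# Stand-in for your_dataset_dir; point this at your real data root.
DATASET_DIR="$(mktemp -d)"
# Assumed per-dataset sub-folders (names may need to match the loaders exactly).
mkdir -p "$DATASET_DIR/MARS" "$DATASET_DIR/LS-VID" "$DATASET_DIR/iLIDS-VID" \
         "$DATASET_DIR/Market1501" "$DATASET_DIR/MSMT17"
ls "$DATASET_DIR"
```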

🚗Run CLIMB-ReID

For example, if you want to run the method on MARS, modify the bottom of configs/vit_base.yml to

DATASETS:
   NAMES: ('MARS')
   ROOT_DIR: ('your_dataset_dir')
OUTPUT_DIR: 'your_output_dir'
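If you run on several datasets, rewriting the NAMES entry with sed saves hand-editing. A sketch on a scratch copy of the config tail (in the repo you would target configs/vit_base.yml directly; that the pattern occurs only once is my assumption):

```shell
# Scratch copy of the config tail so this snippet is self-contained.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
DATASETS:
   NAMES: ('MARS')
   ROOT_DIR: ('your_dataset_dir')
OUTPUT_DIR: 'your_output_dir'
EOF
# Switch the dataset name in place (a .bak backup of the file is kept).
sed -i.bak "s/NAMES: ('[^']*')/NAMES: ('LS-VID')/" "$CFG"
grep "NAMES" "$CFG"
```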

Then, run

CUDA_VISIBLE_DEVICES=0 python train-main.py
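Training runs are long, so it can help to keep a timestamped log next to your checkpoints. A sketch, with echo standing in for the actual training command:

```shell
LOG_DIR="$(mktemp -d)"                        # stand-in for your_output_dir
LOG="$LOG_DIR/train-$(date +%Y%m%d-%H%M%S).log"
# Replace `echo` with: CUDA_VISIBLE_DEVICES=0 python train-main.py
echo "training..." 2>&1 | tee "$LOG"
```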

🚗Evaluation

For example, if you want to test the method on MARS, run

CUDA_VISIBLE_DEVICES=0 python eval-main.py

♥️ Acknowledgment

This project is based on TF-CLIP and VMamba. Thanks for these excellent works.

♥️ Contact

If you have any questions, please feel free to send an email to yuchenyang@mail.dlut.edu.cn or asuradayuci@gmail.com. ^_^

📖 Citation

If you find CLIMB-ReID useful, please consider citing 📣

@inproceedings{climb,
      title     = {CLIMB-ReID: A Hybrid CLIP-Mamba Framework for Person Re-Identification},
      author    = {Yu, Chenyang and Liu, Xuehu and Zhu, Jiawen and Wang, Yuhao and Zhang, Pingping and Lu, Huchuan},
      booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
      volume    = {39},
      number    = {9},
      pages     = {9589--9597},
      year      = {2025}
}

📖 LICENSE

CLIMB-ReID is released under the MIT License.
