VIPerson: Flexibly Generating Virtual Identity for Person Re-Identification

This is the official PyTorch implementation for our ICCV 2025 paper: "VIPerson: Flexibly Generating Virtual Identity for Person Re-Identification". We propose a novel diffusion-based pipeline to synthesize realistic and diverse pedestrian images for Person Re-identification (ReID).

If you find our work useful, please consider giving it a star ⭐️!


📒 News

  • [Nov 2025] The generated VIPerson dataset and pre-trained ReID models are now available. Enjoy! 🚀
  • [June 2025] 🎉 Our paper has been accepted by ICCV 2025!

📝 Abstract

Person re-identification (ReID) aims to match person images across different camera views. Training ReID models requires a substantial amount of labeled real-world data, which leads to high labeling costs and privacy concerns. Although several synthetic ReID data generation methods have been proposed to address these issues, they fail to generate images with new identities or realistic camera styles. In this paper, we propose a novel pedestrian generation pipeline, VIPerson, to generate camera-realistic pedestrian images with flexible Virtual Identities for the Person ReID task. VIPerson focuses on three key factors in data synthesis:

  • 🚶‍♀️ (I) Virtual identity diversity: Enhancing the latent diffusion model with our proposed dropout text embedding, we flexibly generate random and hard identities (a conceptual sketch follows the abstract).
  • 📸 (II) Scalable cross-camera variations: VIPerson introduces scalable variations of scenes and poses within each identity.
  • 🎨 (III) Camera-realistic style: Adopting an identity-agnostic approach to transfer realistic styles, we avoid privacy exposure of real identities.

Extensive experimental results across a broad range of downstream ReID tasks demonstrate the superiority of our generated dataset over existing methods. In addition, VIPerson can be adapted to the privacy-constrained ReID scenario, which broadens the applicability of our pipeline.
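
Since the inference code and identity generator are not yet released (see the TODO list below), the snippet here is only a conceptual sketch of the dropout text embedding in factor (I): whole token embeddings of the identity prompt are randomly masked before conditioning the latent diffusion model, so repeated sampling yields varied virtual identities. The class and hyper-parameter names (DropoutTextEmbedding, drop_prob) are illustrative, not the official API.

```python
import torch
import torch.nn as nn

class DropoutTextEmbedding(nn.Module):
    """Illustrative sketch: randomly mask whole token embeddings of an
    identity prompt before it conditions a latent diffusion model."""

    def __init__(self, drop_prob: float = 0.3):
        super().__init__()
        self.drop_prob = drop_prob  # hypothetical hyper-parameter

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, num_tokens, dim), e.g. a CLIP text-encoder output.
        if not self.training:
            return text_emb
        # One keep/drop decision per token, broadcast over the embedding dim.
        keep = (torch.rand(text_emb.shape[:2], device=text_emb.device)
                > self.drop_prob).float().unsqueeze(-1)
        return text_emb * keep

if __name__ == "__main__":
    emb = torch.randn(2, 77, 768)                 # CLIP-like prompt embeddings
    out = DropoutTextEmbedding(0.3).train()(emb)  # masked conditioning
    print(out.shape)                              # torch.Size([2, 77, 768])
```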

💾 Dataset

We generated a virtual pedestrian dataset for ReID using the VIPerson pipeline. You can download it from the following links:

For the format of the VIPerson dataset, please refer to VIPerson.json (Access Code: 6cxf).
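
The loader below is only a minimal sketch of how the annotations could be consumed: it assumes VIPerson.json is a JSON list of per-image records, each carrying an image path, a virtual identity label, and a camera index. The field names (img_path, pid, camid) are hypothetical; check the released file for the actual schema.

```python
import json
from collections import defaultdict

def load_viperson(json_path: str):
    """Group images by virtual identity. Field names are hypothetical."""
    with open(json_path, "r", encoding="utf-8") as f:
        records = json.load(f)          # assumed: a list of per-image dicts
    by_identity = defaultdict(list)
    for rec in records:
        by_identity[rec["pid"]].append((rec["img_path"], rec["camid"]))
    return by_identity

# identities = load_viperson("VIPerson.json")
# print(len(identities), "virtual identities")
```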

📦 Pre-trained Models

We provide the model weights for easy reproduction and future research.

| Model | Download Link |
| --- | --- |
| VIPerson checkpoint | Google Drive / Baidu Cloud (Access Code: qtxy) |
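
As a rough starting point, the snippet below shows how such a checkpoint could be loaded in PyTorch. It assumes a ResNet-50-style ReID backbone and a plain state_dict (possibly wrapped under a "state_dict" key), which may not match the released checkpoint exactly; the file name viperson_checkpoint.pth is a placeholder.

```python
import torch
import torchvision

# Assumption: a ResNet-50-style ReID backbone; adapt to the actual architecture.
model = torchvision.models.resnet50(weights=None)
model.fc = torch.nn.Identity()                    # use the pooled 2048-d feature

state = torch.load("viperson_checkpoint.pth", map_location="cpu")
if isinstance(state, dict) and "state_dict" in state:
    state = state["state_dict"]                   # unwrap a common wrapper format
missing, unexpected = model.load_state_dict(state, strict=False)
print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")

model.eval()
with torch.no_grad():
    feat = model(torch.randn(1, 3, 256, 128))     # typical ReID input size
print(feat.shape)                                 # torch.Size([1, 2048])
```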

✅ TODO List

  • Release pre-trained models.
  • Release the VIPerson dataset.
  • Release inference code and identity generator.
  • Add training scripts for more downstream ReID models.

Citing VIPerson

If you use our code or dataset in your research, please consider citing our paper:

@inproceedings{zhang2025viperson,
  title={VIPerson: Flexibly Generating Virtual Identity for Person Re-Identification},
  author={Zhang, Xiao-Wen and Zhang, Delong and Peng, Yi-Xing and Ouyang, Zhi and Meng, Jingke and Zheng, Wei-Shi},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={23374--23384},
  year={2025}
}

Acknowledgements

Our implementation references several outstanding open-source projects. We thank them for their contributions to the open-source community.
