DachunKai/Ev-DeblurVSR

Official PyTorch implementation of the paper "Event-Enhanced Blurry Video Super-Resolution" (AAAI 2025).

🌐 Project | 📃 Paper

Authors: Dachun Kai📧️, Yueyi Zhang, Jin Wang, Zeyu Xiao, Zhiwei Xiong, Xiaoyan Sun, University of Science and Technology of China

Feel free to ask questions. If our work helps, please don't hesitate to give us a ⭐!

🚀 News

  • 2025/04/17: Release pretrained models and test sets for quick testing
  • 2025/01/07: Video demos released
  • 2024/12/15: Initialize the repository
  • 2024/12/09: 🎉🎉 Our paper was accepted to AAAI 2025

🔖 Table of Contents

  1. Video Demos
  2. Code
  3. Citation
  4. Contact
  5. License and Acknowledgement

🔥 Video Demos

$4\times$ blurry video upsampling results on the synthetic GoPro test set and the real-world NCER test set.

NCER_traffic_sign.mp4
NCER_building.mp4
GoPro_car.mp4
GoPro_park.mp4

Code

Installation

  • Dependencies: Miniconda, CUDA Toolkit 11.1.1, torch 1.10.2+cu111, and torchvision 0.11.3+cu111.

  • Run in Conda (Recommended)

    conda create -y -n ev-deblurvsr python=3.7
    conda activate ev-deblurvsr
    pip install torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl
    pip install torchvision-0.11.3+cu111-cp37-cp37m-linux_x86_64.whl
    git clone https://github.com/DachunKai/Ev-DeblurVSR
    cd Ev-DeblurVSR && pip install -r requirements.txt && python setup.py develop
  • Run in Docker 👍

    Note: We use the same docker image as our previous work EvTexture.

    [Option 1] Directly pull the published Docker image we have provided from Alibaba Cloud.

    docker pull registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest

    [Option 2] We also provide a Dockerfile that you can use to build the image yourself.

    cd EvTexture && docker build -t evtexture ./docker

    The pulled or self-built Docker image contains a complete conda environment named evtexture. After running the image, you can mount your data and operate within this environment.

    source activate evtexture && cd EvTexture && python setup.py develop
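The pinned wheels in the Conda instructions above are built for CPython 3.7 (the cp37 tag), which is why the environment is created with python=3.7. As an illustration (a hypothetical helper, not part of this repo), the tag of the running interpreter can be checked against a wheel filename before installing:

```python
import sys

def interpreter_wheel_tag():
    """ABI-style tag of the running CPython, e.g. 'cp37' for Python 3.7."""
    return f"cp{sys.version_info.major}{sys.version_info.minor}"

def wheel_matches(wheel_filename):
    """True if a wheel such as 'torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl'
    targets the running interpreter."""
    return f"-{interpreter_wheel_tag()}-" in wheel_filename

# Example: on Python 3.7 this returns True for the torch wheel above.
print(wheel_matches("torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl"))
```

If the check fails, pip will refuse the wheel with a "not a supported wheel on this platform" error.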

Test

  1. Download the pretrained models from (Releases / Baidu Cloud (n8hg)) and place them in experiments/pretrained_models/EvDeblurVSR/. The network architecture code is in evdeblurvsr_arch.py.

    • Synthetic dataset models:
      • EvDeblurVSR_GOPRO_BIx4.pth: trained on the GoPro dataset with Blur-Sharp pairs and BI degradation for the $4\times$ SR scale.
      • EvDeblurVSR_BSD_BIx4.pth: trained on the BSD dataset with Blur-Sharp pairs and BI degradation for the $4\times$ SR scale.
    • Real-world dataset model:
      • EvDeblurVSR_NCER_BIx4.pth: trained on the NCER dataset with Blur-Sharp pairs and BI degradation for the $4\times$ SR scale.
  2. Download the preprocessed test sets (including events) for GoPro, BSD, and NCER from (Baidu Cloud (n8hg) / Google Drive), and place them in datasets/.

    • GoPro_h5: HDF5 files for the preprocessed GoPro test set.

    • BSD_h5: HDF5 files for the preprocessed BSD test set.

    • NCER_h5: HDF5 files for the preprocessed NCER test set.

  3. Run the following command:

    • Test on GoPro for 4x Blurry VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/EvDeblurVSR/test_EvDeblurVSR_GoPro_x4.yml
    • Test on BSD for 4x Blurry VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/EvDeblurVSR/test_EvDeblurVSR_BSD_x4.yml
    • Test on NCER for 4x Blurry VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/EvDeblurVSR/test_EvDeblurVSR_NCER_x4.yml

    This will generate the inference results in results/. The output results on GoPro, BSD and NCER datasets can be downloaded from (Releases / Baidu Cloud (n8hg)).

  4. Measure the number of parameters, runtime, and FLOPs:

    python test_scripts/test_params_runtime.py
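For intuition about what step 4 reports, the standard closed forms for a single Conv2d layer are sketched below. This is an illustrative helper under the usual conventions (one multiply-accumulate counted as 2 FLOPs), not the repo's test_params_runtime.py:

```python
def conv2d_params(in_ch, out_ch, k, bias=True):
    # weights: out_ch * in_ch * k * k, plus one bias per output channel
    return out_ch * in_ch * k * k + (out_ch if bias else 0)

def conv2d_flops(in_ch, out_ch, k, h_out, w_out, bias=True):
    # one multiply-accumulate (2 FLOPs) per weight per output location,
    # plus one add per output element for the bias
    macs = out_ch * in_ch * k * k * h_out * w_out
    return 2 * macs + (out_ch * h_out * w_out if bias else 0)

# Example: a 3-to-64-channel 3x3 convolution has 1792 parameters.
print(conv2d_params(3, 64, 3))
```

Summing these over all layers gives the totals the script prints; runtime is measured separately by timing forward passes.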

Input Data Structure

  • Both video and event data are required as input. We package each video together with its event data into a single HDF5 file.

  • Example: the structure of the GOPR0384_11_00.h5 file from the GoPro dataset is shown below.

    GOPR0384_11_00.h5
    ├── images
    │   ├── 000000 # frame, ndarray, [H, W, C]
    │   ├── ...
    ├── vFwd
    │   ├── 000000 # inter-frame forward event voxel, ndarray, [Bins, H, W]
    │   ├── ...
    ├── vBwd
    │   ├── 000000 # inter-frame backward event voxel, ndarray, [Bins, H, W]
    │   ├── ...
    ├── vExpo
    │   ├── 000000 # intra-frame exposure event voxel, ndarray, [Bins, H, W]
    │   ├── ...
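The vFwd, vBwd, and vExpo entries store raw event streams discretized into temporal voxel grids of shape [Bins, H, W]. A minimal pure-Python sketch of this binning (nearest-bin accumulation; the actual preprocessing pipeline may additionally use bilinear temporal interpolation and normalization):

```python
def events_to_voxel(events, num_bins, height, width):
    """Accumulate events (t, x, y, polarity) into a [Bins, H, W] voxel grid.

    Each event adds +1 (positive polarity) or -1 (negative polarity) to the
    temporal bin nearest to its timestamp. Illustrative only.
    """
    voxel = [[[0.0] * width for _ in range(height)] for _ in range(num_bins)]
    if not events:
        return voxel
    t0, t1 = events[0][0], events[-1][0]
    span = max(t1 - t0, 1e-9)  # avoid division by zero for a single timestamp
    for t, x, y, p in events:
        b = min(int((t - t0) / span * num_bins), num_bins - 1)
        voxel[b][y][x] += 1.0 if p > 0 else -1.0
    return voxel
```

For example, three events spread over one frame interval with num_bins=2 land in the first or second bin according to their timestamps, with polarity deciding the sign of the contribution.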

😊 Citation

If you find the code and pre-trained models useful for your research, please consider citing our paper. 😃

@inproceedings{kai2025event,
  title={Event-{E}nhanced {B}lurry {V}ideo {S}uper-{R}esolution},
  author={Kai, Dachun and Zhang, Yueyi and Wang, Jin and Xiao, Zeyu and Xiong, Zhiwei and Sun, Xiaoyan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={4},
  pages={4175--4183},
  year={2025}
}

Contact

If you encounter any problems, please describe them in an issue or contact the authors.

License and Acknowledgement

This project is released under the Apache 2.0 License. Our work builds significantly upon our previous project EvTexture. We would also like to sincerely thank the developers of BasicSR, an open-source toolbox for image and video restoration tasks. Additionally, we appreciate the inspiration and code provided by BasicVSR++, RAFT and event_utils.
