DachunKai/RetinexEVSR

Official PyTorch implementation for the "Seeing the Unseen: Zooming in the Dark with Event Cameras" paper (AAAI 2026).

Authors: Dachun Kai📧️, Zeyu Xiao, Huyue Zhu, Jiaxiao Wang, Yueyi Zhang, Xiaoyan Sun

Affiliations: University of Science and Technology of China, National University of Singapore, Miromind AI

Feel free to ask questions. If our work helps, please don't hesitate to give us a ⭐!

🚀 News

  • 2026/01/05: Repository made public
  • 2025/11/17: Released pretrained models and test sets for quick testing
  • 2025/11/17: Released video demos
  • 2025/11/15: Initialized the repository
  • 2025/11/08: 🎉 🎉 Our paper was accepted to AAAI 2026

🔖 Table of Contents

  1. Code
  2. Citation
  3. Contact
  4. License and Acknowledgement

Code

Installation

  • Dependencies: Miniconda, CUDA Toolkit 11.1.1, torch 1.10.2+cu111, and torchvision 0.11.3+cu111.

  • Run in Conda (Recommended)

    conda create -y -n RetinexEVSR python=3.7
    conda activate RetinexEVSR
    pip install torch==1.10.2+cu111 torchvision==0.11.3+cu111 -f https://download.pytorch.org/whl/torch_stable.html 
    git clone https://github.com/DachunKai/RetinexEVSR
    cd RetinexEVSR && pip install -r requirements.txt && python setup.py develop
  • Run in Docker 👏

    Note: We use the same Docker image as our previous work EvTexture.

    [Option 1] Directly pull the published Docker image we have provided from Alibaba Cloud.

    docker pull registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest

    [Option 2] We also provide a Dockerfile that you can use to build the image yourself.

    cd RetinexEVSR && docker build -t retinexevsr ./docker

    The pulled or self-built Docker image contains a complete conda environment. When running a container from the image, mount your data and work inside this environment.

    source activate RetinexEVSR && cd RetinexEVSR && python setup.py develop
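For reference, starting a container from the pulled image and mounting your data might look like the following. The host paths, mount points, and the `--gpus` flag are illustrative; adjust them to your setup.

```shell
# Start an interactive container with GPU access (host paths are examples).
docker run -it --gpus all \
  -v /path/to/RetinexEVSR:/workspace/RetinexEVSR \
  -v /path/to/datasets:/workspace/RetinexEVSR/datasets \
  registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest /bin/bash
```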

Test

  1. Download the pretrained models from (Releases / Google Drive / Baidu Cloud (n8hg)) and place them in experiments/pretrained_models/RetinexEVSR/. The network architecture code is in retinexevsr_arch.py.

    • Synthetic dataset models:
      • RetinexEVSR_SDSD_Indoor_BIx4.pth: trained on SDSD Indoor dataset.
      • RetinexEVSR_SDSD_Outdoor_BIx4.pth: trained on SDSD Outdoor dataset.
    • Real-world dataset models:
      • RetinexEVSR_SDE_Indoor_BIx4.pth: trained on SDE Indoor dataset.
      • RetinexEVSR_SDE_Outdoor_BIx4.pth: trained on SDE Outdoor dataset.
      • RetinexEVSR_RELED_BIx4.pth: trained on RELED dataset.
  2. Download the preprocessed test sets (including events) for SDSD, SDE, and RELED from (Google Drive / Baidu Cloud (n8hg)), and place them in datasets/.

    • SDSD: HDF5 files containing preprocessed test datasets for SDSD_Indoor and SDSD_Outdoor.
    • SDE: HDF5 files containing preprocessed test datasets for SDE_Indoor and SDE_Outdoor.
    • RELED: HDF5 files containing preprocessed test datasets for RELED.
  3. Run one of the following commands. We test on 8 RTX 4090 GPUs, which is significantly faster than using two GPUs.

    • Test on SDSD Indoor for 4x Low-Light VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/RetinexEVSR/test_RetinexEVSR_SDSD_IN_x4.yml
    • Test on SDSD Outdoor for 4x Low-Light VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/RetinexEVSR/test_RetinexEVSR_SDSD_OUT_x4.yml
    • Test on SDE Indoor for 4x Low-Light VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/RetinexEVSR/test_RetinexEVSR_SDE_IN_x4.yml
    • Test on SDE Outdoor for 4x Low-Light VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/RetinexEVSR/test_RetinexEVSR_SDE_OUT_x4.yml
    • Test on RELED for 4x Low-Light VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/RetinexEVSR/test_RetinexEVSR_RELED_x4.yml

    This generates the inference results in results/. Our output results on the SDSD, SDE, and RELED datasets can be downloaded from (Releases / Google Drive / Baidu Cloud (n8hg)).

  4. Test the number of parameters, runtime, and FLOPs:

    python test_scripts/test_params_runtime.py
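As a rough cross-check of the FLOPs figure the script reports, the standard back-of-envelope estimate for a single convolution layer is 2 · K² · C_in · C_out · H · W, counting each multiply-accumulate as two operations. A minimal sketch; the layer sizes below are illustrative, not RetinexEVSR's actual configuration:

```python
def conv2d_flops(k, c_in, c_out, h, w):
    """Approximate FLOPs of a KxK convolution producing a [c_out, h, w] output.

    Each output element needs k*k*c_in multiply-accumulates; counting one
    multiply-accumulate as two floating-point operations gives the factor 2.
    """
    return 2 * k * k * c_in * c_out * h * w

# Hypothetical example: a 3x3 conv with 64 input/output channels
# applied to a 180x320 low-resolution frame.
flops = conv2d_flops(k=3, c_in=64, c_out=64, h=180, w=320)
print(f"{flops / 1e9:.2f} GFLOPs")  # prints "4.25 GFLOPs"
```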

Input Data Structure

  • Both video and event data are required as input. We package each video and its event data into an HDF5 file.

  • The low-light, low-resolution (LR) HDF5 file contains images and voxels.

  • The Ground-Truth (GT) HDF5 file contains images.

  • Example: The structure of an HDF5 file:

    ClipName.h5
    ├── images
    │   ├── 000000 # frame, ndarray, [H, W, C]
    │   ├── ...
    ├── voxels
    │   ├── 000000 # event voxel, ndarray, [Bins, H, W]
    │   ├── ...
    
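The layout above can be mirrored with plain Python dictionaries to sketch how a loader walks one LR clip. This mock uses numpy arrays in place of h5py datasets (with h5py, `h5py.File(path)` exposes the same mapping interface, so the loop body would be unchanged); all sizes below are illustrative:

```python
import numpy as np

# Mock of one LR clip's HDF5 layout: keys are zero-padded frame indices,
# frames are [H, W, C] uint8, event voxels are [Bins, H, W] float32.
H, W, C, BINS = 64, 96, 3, 5
clip = {
    "images": {f"{i:06d}": np.zeros((H, W, C), dtype=np.uint8) for i in range(3)},
    "voxels": {f"{i:06d}": np.zeros((BINS, H, W), dtype=np.float32) for i in range(3)},
}

# Iterate frames and their temporally aligned event voxels in order.
for key in sorted(clip["images"]):
    frame = clip["images"][key]
    voxel = clip["voxels"][key]
    print(key, frame.shape, voxel.shape)
```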

Citation

If you find our work useful for your research, please consider citing:

@inproceedings{kai2026seeing,
  title={Seeing the Unseen: Zooming in the Dark with Event Cameras},
  author={Kai, Dachun and Xiao, Zeyu and Zhu, Huyue and Wang, Jiaxiao and Zhang, Yueyi and Sun, Xiaoyan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}

Contact

If you have any questions, please contact Dachun Kai.

License and Acknowledgement

This project is released under the Apache 2.0 License. Our work builds significantly upon our previous projects EvTexture and Ev-DeblurVSR. We would also like to sincerely thank the developers of BasicSR, an open-source toolbox for image and video restoration tasks. Additionally, we appreciate the inspiration and code provided by event_utils.