TSGF-GAN is a new GAN-based multi-focus image fusion (MFIF) model that leverages a trainable guided filter module to improve fusion quality by predicting more accurate focus maps. The self-guided adaptive filtering refines the predicted focus maps and yields superior multi-focus fusion results. The proposed approach outperforms existing GAN-based MFIF methods and achieves highly competitive performance compared with state-of-the-art methods.
For a comprehensive understanding and deeper insights, we invite you to explore the paper.
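To give a concrete feel for the guided-filtering idea behind the focus-map refinement, the snippet below is a minimal, non-trainable guided filter in PyTorch (the classic box-filter formulation). The radius r and regularization eps are illustrative defaults, not the paper's settings, and this sketch is not the trainable self-guided module proposed in TSGF-GAN; please see the paper for the actual design.

import torch
import torch.nn.functional as F

def box_filter(x, r):
    # Local mean over a (2r+1)x(2r+1) window; border windows average only the valid pixels.
    return F.avg_pool2d(x, kernel_size=2 * r + 1, stride=1, padding=r, count_include_pad=False)

def guided_filter(guide, src, r=4, eps=1e-2):
    # Classic guided filter: smooths `src` while following the edges of `guide`.
    # Inputs are (B, C, H, W) tensors; r and eps are illustrative values only.
    mean_I = box_filter(guide, r)
    mean_p = box_filter(src, r)
    cov_Ip = box_filter(guide * src, r) - mean_I * mean_p
    var_I = box_filter(guide * guide, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # local linear coefficients
    b = mean_p - a * mean_I
    mean_a, mean_b = box_filter(a, r), box_filter(b, r)
    return mean_a * guide + mean_b      # refined map, edge-aligned with the guide

# Example: refine a coarse focus map using a (hypothetical) grayscale source image as guidance.
source_img = torch.rand(1, 1, 256, 256)
focus_map = torch.rand(1, 1, 256, 256)
refined_map = guided_filter(source_img, focus_map)
print(refined_map.shape)  # torch.Size([1, 1, 256, 256])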
TSGF-GAN is implemented in PyTorch.
It requires the following dependencies:
Python 3.8.3
PyTorch 1.7.1
CUDA 11.1
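For example, a compatible environment can be created as follows (the environment name tsgf-gan is arbitrary; the pip line pulls the PyTorch 1.7.1 / CUDA 11.1 wheels from the official PyTorch wheel index):

conda create -n tsgf-gan python=3.8
conda activate tsgf-gan
pip install torch==1.7.1+cu111 torchvision==0.8.2+cu111 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html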
Given a dataset root path containing folders of input multi-focus images and their corresponding all-in-focus images, you can train your own model.
We follow MFIF-GAN to generate training data from the Pascal VOC 2012 dataset; a rough sketch of this synthesis step is given below.
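The snippet below only illustrates the common mask-based synthesis idea (keep one region sharp and blur the other, alternating between the two sources). The file paths, folder names, and blur parameters are hypothetical, and the exact protocol follows MFIF-GAN, so please consult that paper and its code for the precise procedure.

import os
import cv2
import numpy as np

def make_multifocus_pair(all_in_focus, mask, ksize=15, sigma=3.0):
    # Synthesize two partially focused images from an all-in-focus image and a binary object mask.
    # ksize/sigma are hypothetical blur parameters, not the values used in the paper.
    blurred = cv2.GaussianBlur(all_in_focus, (ksize, ksize), sigma)
    m = np.repeat(mask[..., None], 3, axis=2).astype(np.float32)
    source_a = all_in_focus * m + blurred * (1.0 - m)   # foreground in focus, background blurred
    source_b = blurred * m + all_in_focus * (1.0 - m)   # background in focus, foreground blurred
    return source_a.astype(np.uint8), source_b.astype(np.uint8)

# Hypothetical usage with one VOC 2012 image and its segmentation mask:
for sub in ("sourceA", "sourceB", "groundtruth"):
    os.makedirs(os.path.join("mfif_dataset", sub), exist_ok=True)
img = cv2.imread("VOC2012/JPEGImages/2007_000033.jpg").astype(np.float32)
mask = cv2.imread("VOC2012/SegmentationClass/2007_000033.png", cv2.IMREAD_GRAYSCALE) > 0
src_a, src_b = make_multifocus_pair(img, mask)
cv2.imwrite("mfif_dataset/sourceA/2007_000033.jpg", src_a)
cv2.imwrite("mfif_dataset/sourceB/2007_000033.jpg", src_b)
cv2.imwrite("mfif_dataset/groundtruth/2007_000033.jpg", img.astype(np.uint8))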
You can find the test data under the "datasets" folder. Please refer to the papers below if you use these datasets in your research.
Nejati, M., Samavi, S., & Shirani, S. (2015). Multi-focus image fusion using dictionary-based sparse representation. Information Fusion, 25, 72-84.
Xu, S., Wei, X., Zhang, C., Liu, J., & Zhang, J. (2020). MFFW: A new dataset for multi-focus image fusion. arXiv preprint arXiv:2002.04780.
Zhang, H., Le, Z., Shao, Z., Xu, H., & Ma, J. (2021). MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion. Information Fusion, 66, 40-53.
You can train TSGF-GAN using the following command.
python main.py --root_traindata ./mfif_dataset/ --model_save_dir ./models/ --model_name mfif
You can test TSGF-GAN using the following command. The pre-trained model is available under the "models" directory.
python test.py --root_testdata ./datasets --test_dataset LytroDataset --root_result ./results --root_model ./models/ --model_name tsgf-gan_best
To evaluate TSGF-GAN, we use the following MATLAB implementations; a small Python sanity-check sketch is provided after the links.
https://github.com/zhengliu6699/imageFusionMetrics
https://github.com/xytmhy/Evaluation-Metrics-for-Image-Fusion
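All reported scores are computed with the MATLAB toolboxes above. As a quick sanity check without MATLAB, the snippet below computes the spatial frequency (SF) metric, one of the standard no-reference fusion measures, for a single fused image; the file path is hypothetical, and this is not a replacement for the official evaluation code.

import cv2
import numpy as np

def spatial_frequency(image):
    # Spatial frequency (SF) of a grayscale image; higher values indicate richer detail.
    img = image.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row (horizontal) differences
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column (vertical) differences
    return np.sqrt(rf ** 2 + cf ** 2)

# Hypothetical path to one fused Lytro result:
fused = cv2.imread("results/LytroDataset/lytro-01.png", cv2.IMREAD_GRAYSCALE)
print("SF =", round(spatial_frequency(fused), 3))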
Some parts of our code are adapted from pix2pixHD.
We have included the results for three datasets (Lytro, MFFW, MFI-WHU) in the "results" folder.
Feel free to reach out to me with any questions regarding TSGF-GAN or to explore collaboration opportunities in solving diverse computer vision and image processing challenges. For additional details about my research, please visit my personal webpage.
@ARTICLE{karacan2023tsgfgan,
author={Karacan, Levent},
journal={IEEE Access},
title={Trainable Self-Guided Filter for Multi-Focus Image Fusion},
year={2023},
volume={11},
number={},
pages={139466-139477},
doi={10.1109/ACCESS.2023.3335307}}

