FDRNet: A Frequency-decoupled and Retinex-guided Framework for Medical Image Enhancement
Weicheng Liao, Yuhui Ma, Zan Chen, Yitian Zhao
Automated medical image analysis plays a critical role in disease screening and diagnosis. However, image quality degradation, such as blurring and uneven illumination, hinders both clinical interpretation and computer-aided diagnostic performance. Most existing enhancement methods overlook frequency-domain degradation patterns, leading to over-enhancement or loss of clinically relevant details. To address these limitations, we propose in this paper a novel frequency-aware medical image enhancement framework, termed FDRNet, comprising two key components: (1) a frequency-decoupled deblurring module with asymmetric channel integration, which fuses global and local views of high- and low-frequency information to preserve both fine and broad structural features, and (2) a Retinex-guided illumination compensation module with a multi-scale color preservation unit for accurate illumination estimation and correction. Our framework further incorporates a dual-domain collaboration mechanism into each encoder-decoder block of the deblurring module, enabling joint learning of degradation representations in the spatial and frequency domains. Extensive experiments on three medical image modalities, covering more than seven public and clinical datasets, demonstrate that our method surpasses both traditional and learning-based enhancement techniques. Evaluations on downstream clinical tasks, including vessel segmentation, polyp segmentation, disease diagnosis, and disease severity grading, further confirm significant improvements in task performance and clinical applicability.
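For intuition, the sketch below illustrates the two generic operations the abstract refers to, frequency decoupling and Retinex-style illumination estimation, in a minimal PyTorch form. The mask radius, box-blur illumination estimate, and function names are illustrative assumptions for a single-channel input, not FDRNet's actual modules.

```python
# Minimal sketch (not FDRNet's actual modules): FFT-based frequency
# decoupling and a Retinex-style illumination/reflectance estimate,
# assuming a single-channel image tensor of shape (B, 1, H, W).
import torch
import torch.fft
import torch.nn.functional as F


def frequency_decouple(img: torch.Tensor, radius: int = 16):
    """Split an image into low- and high-frequency components with a
    circular low-pass mask in the Fourier domain (radius is a free
    hyper-parameter chosen here for illustration)."""
    _, _, h, w = img.shape
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.arange(h, device=img.device),
        torch.arange(w, device=img.device),
        indexing="ij",
    )
    dist = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float().sqrt()
    low_mask = (dist <= radius).float()  # keep the centre of the spectrum
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * low_mask, dim=(-2, -1))).real
    high = img - low                      # residual carries high frequencies
    return low, high


def retinex_decompose(img: torch.Tensor, blur_kernel: int = 31):
    """Classical single-scale Retinex: estimate illumination with a large
    box blur and take reflectance as the log-domain residual."""
    kernel = torch.ones(1, 1, blur_kernel, blur_kernel, device=img.device)
    kernel = kernel / kernel.numel()
    illumination = F.conv2d(img, kernel, padding=blur_kernel // 2)
    reflectance = torch.log1p(img) - torch.log1p(illumination.clamp(min=1e-6))
    return illumination, reflectance


if __name__ == "__main__":
    x = torch.rand(1, 1, 128, 128)        # dummy grayscale image in [0, 1]
    low, high = frequency_decouple(x)
    illum, refl = retinex_decompose(x)
    print(low.shape, high.shape, illum.shape, refl.shape)
```

In this toy decomposition, the low-frequency branch captures broad illumination and structure while the high-frequency residual carries edges and fine detail; the paper's modules learn such a separation rather than using fixed masks and blurs.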