
FCL-GAN: A Lightweight and Real-Time Baseline for Unsupervised Blind Image Deblurring

Authors :
Zhao, Suiyi
Zhang, Zhao
Hong, Richang
Xu, Mingliang
Yang, Yi
Wang, Meng
Publication Year :
2022

Abstract

Blind image deblurring (BID) remains a challenging and significant task. Benefiting from the strong fitting ability of deep learning, paired data-driven supervised BID methods have made great progress. However, paired data are usually synthesized by hand, and realistic blurs are more complex than synthetic ones, which makes supervised methods inept at modeling realistic blurs and hinders their real-world applications. As such, unsupervised deep BID methods without paired data offer certain advantages, but current methods still suffer from drawbacks such as bulky model size, long inference time, and strict image resolution and domain requirements. In this paper, we propose a lightweight and real-time unsupervised BID baseline, termed Frequency-domain Contrastive Loss Constrained Lightweight CycleGAN (shortly, FCL-GAN), with attractive properties: no image domain limitation, no image resolution limitation, 25x lighter than the SOTA, and 5x faster than the SOTA. To guarantee the lightweight property and performance superiority, two new collaboration units, a lightweight domain conversion unit (LDCU) and a parameter-free frequency-domain contrastive unit (PFCU), are designed. LDCU mainly implements inter-domain conversion in a lightweight manner. PFCU further explores the similarity measure, external difference and internal connection between blurred-domain and sharp-domain images in the frequency domain, without involving extra parameters. Extensive experiments on several image datasets demonstrate the effectiveness of our FCL-GAN in terms of performance, model size and inference time.

Comment: Please cite this work as: Suiyi Zhao, Zhao Zhang, Richang Hong, Mingliang Xu, Yi Yang and Meng Wang, "FCL-GAN: A Lightweight and Real-Time Baseline for Unsupervised Blind Image Deblurring," In: Proceedings of the 30th ACM International Conference on Multimedia (ACM MM), Lisbon, Portugal, June 2022.
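To make the parameter-free frequency-domain idea concrete, below is a minimal, hypothetical sketch of how a frequency-domain contrastive loss could be computed with a 2-D FFT in PyTorch. It is not the paper's actual PFCU; the function name, the L1 distance, and the ratio form are illustrative assumptions that only convey the general idea of pulling a restored image's spectrum toward a sharp (positive) example and pushing it away from a blurred (negative) one without introducing learnable parameters.

```python
import torch
import torch.nn.functional as F

def freq_contrastive_loss(restored, sharp, blurred, eps=1e-8):
    """Illustrative frequency-domain contrastive loss (assumption, not the paper's exact PFCU).

    Pulls the restored image's amplitude spectrum toward the sharp (positive)
    spectrum and pushes it away from the blurred (negative) spectrum,
    adding no learnable parameters.
    """
    # Amplitude spectra via 2-D FFT; high-frequency content is what
    # distinguishes sharp images from blurred ones.
    amp = lambda x: torch.abs(torch.fft.fft2(x, norm="ortho"))
    a_restored, a_sharp, a_blurred = amp(restored), amp(sharp), amp(blurred)

    # L1 distances to the positive (sharp) and negative (blurred) anchors.
    d_pos = F.l1_loss(a_restored, a_sharp)
    d_neg = F.l1_loss(a_restored, a_blurred)

    # Contrastive ratio: minimize the distance to sharp relative to blurred.
    return d_pos / (d_neg + eps)
```

In an unpaired CycleGAN-style setup, such a term would presumably be added to the adversarial and cycle-consistency objectives, with the sharp and blurred anchors drawn from the two unpaired domains rather than from matched image pairs.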

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2204.07820
Document Type :
Working Paper