Recurrent RLCN-Guided Attention Network for Single Image Deraining

Authors:
Masatoshi Okutomi
Yusuke Monno
Yizhou Li
Source:
MVA
Publication Year:
2021
Publisher:
IEEE, 2021.

Abstract

Single image deraining is an important yet challenging task due to the ill-posed nature of recovering a rain-free clean image from a rainy image. In this paper, we propose the Recurrent RLCN-Guided Attention Network (RRANet) for single image deraining. Our main technical contributions are threefold: (i) We propose rectified local contrast normalization (RLCN), applied to the input rainy image to effectively mark candidate rain regions. (ii) We propose the RLCN-guided attention module (RLCN-GAM), which learns an effective attention map for deraining without requiring ground-truth rain masks. (iii) We incorporate RLCN-GAM into a recurrent neural network to progressively derive the rainy-to-clean image mapping. Quantitative and qualitative evaluations on representative deraining benchmark datasets demonstrate that the proposed RRANet outperforms existing state-of-the-art deraining methods; it is particularly noteworthy that our method clearly achieves the best performance on a real-world dataset.
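The abstract does not spell out the exact form of RLCN. As a rough, non-authoritative sketch, if RLCN is taken to be a standard local contrast normalization followed by a rectification that keeps only positive deviations (rain streaks tend to be brighter than their surroundings), it might look like the code below; the function name rlcn and the parameters kernel_size and eps are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def rlcn(x: torch.Tensor, kernel_size: int = 9, eps: float = 1e-6) -> torch.Tensor:
    # Sketch only: assumes RLCN = local contrast normalization + rectification.
    # x: (N, C, H, W) rainy image in [0, 1]; returns a map highlighting pixels
    # brighter than their local neighborhood, a plausible cue for rain streaks.
    pad = kernel_size // 2
    mu = F.avg_pool2d(x, kernel_size, stride=1, padding=pad, count_include_pad=False)
    mu_sq = F.avg_pool2d(x * x, kernel_size, stride=1, padding=pad, count_include_pad=False)
    sigma = (mu_sq - mu * mu).clamp(min=0.0).sqrt()   # local standard deviation
    return torch.relu((x - mu) / (sigma + eps))       # rectify: keep positive contrast

In the paper, such a map would then guide the attention module (RLCN-GAM); this snippet is only meant to illustrate the general idea of contrast-based marking of rain candidates.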

Details

Database:
OpenAIRE
Journal:
2021 17th International Conference on Machine Vision and Applications (MVA)
Accession number:
edsair.doi...........1af2dafe682a6069b363d6cc304963db