
RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining

Authors :
Wu, Hongtao
Yang, Yijun
Xu, Huihui
Wang, Weiming
Zhou, Jinni
Zhu, Lei
Publication Year :
2024

Abstract

Outdoor vision systems are frequently contaminated by rain streaks and raindrops, which significantly degrade the performance of visual tasks and multimedia applications. Videos inherently provide redundant temporal cues, enabling more stable rain removal. Traditional video deraining methods rely heavily on optical flow estimation and kernel-based methods, which have a limited receptive field. Transformer architectures, while capable of modeling long-term dependencies, incur a significant increase in computational complexity. Recently, the linear-complexity operator of state space models (SSMs) has instead enabled efficient long-term temporal modeling, which is crucial for removing rain streaks and raindrops in videos. However, its one-dimensional sequential scanning of videos destroys local correlations across the spatio-temporal dimensions by distancing adjacent pixels. To address this, we present an improved SSM-based video deraining network (RainMamba) with a novel Hilbert scanning mechanism to better capture sequence-level local information. We also introduce a difference-guided dynamic contrastive locality learning strategy to enhance the patch-level self-similarity learning ability of the proposed network. Extensive experiments on four synthesized video deraining datasets and real-world rainy videos demonstrate the superiority of our network in the removal of rain streaks and raindrops.

Comment: ACM Multimedia 2024
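The locality argument behind Hilbert scanning can be illustrated with the standard bit-manipulation algorithm for computing a pixel's Hilbert-curve index. This is a generic sketch, not the paper's implementation: `xy_to_hilbert` is a hypothetical helper name, and the example only shows why flattening a frame along a Hilbert curve keeps spatially adjacent pixels close in the resulting 1-D sequence, unlike plain row-major scanning.

```python
def xy_to_hilbert(n: int, x: int, y: int) -> int:
    """Map pixel (x, y) on an n x n grid (n a power of two) to its
    index along the Hilbert curve, using the classic iterative
    quadrant-rotation algorithm."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the recursion pattern repeats.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d


if __name__ == "__main__":
    n = 8
    # Vertically adjacent pixels: row-major flattening always separates
    # them by n positions, while the Hilbert order often keeps them
    # consecutive, preserving sequence-level locality.
    for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]:
        print((x, y), "-> hilbert", xy_to_hilbert(n, x, y),
              "| raster", y * n + x)
```

A key property of this ordering is that every pair of consecutive Hilbert indices corresponds to spatially adjacent pixels (Manhattan distance 1), which is exactly the kind of local correlation a one-dimensional SSM scan can then exploit.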

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2407.21773
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3664647.3680916