
DRFNet: dual stream recurrent feature sharing network for video dehazing.

Authors :
Galshetwar, Vijay M.
Saini, Poonam
Chaudhary, Sachin
Source :
International Journal of Machine Learning & Cybernetics; Aug 2024, Vol. 15 Issue 8, p3397-3412, 16p
Publication Year :
2024

Abstract

The primary effects of haze on captured images/frames are visibility degradation and color disturbance. Although extensive research has been done on video dehazing, existing methods fail to perform well on varicolored hazy videos; varicolored haze remains a challenging problem in video dehazing. Contextual information alone is not sufficient to tackle varicolored haze: in addition to adequate contextual information, color balancing is required to restore varicolored hazy images/videos. Therefore, this paper proposes a novel lightweight dual stream recurrent feature sharing network (with only 1.77 M parameters) for video dehazing. The proposed framework involves: (1) a color balancing module to balance the color of the input hazy frame in YCbCr space; (2) a multi-receptive multi-resolution module (MMM), which interlinks the RGB- and YCbCr-based features to learn global and rich contextual information; (3) a feature aggregation residual module (FARM) to strengthen representative capability during reconstruction; (4) a channel attention module to suppress redundant features by recalibrating the weights of the input features. Experimental results and an ablation study show that the proposed model is superior to existing state-of-the-art approaches for video dehazing. [ABSTRACT FROM AUTHOR]
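The abstract describes the channel attention module only qualitatively (recalibrating feature weights to suppress redundant channels). Below is a minimal sketch of one common way such channel recalibration is implemented, in squeeze-and-excitation style; the class name ChannelAttention, the reduction ratio, and the layer layout are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Hedged sketch: recalibrates per-channel weights of a feature map.
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                      # per-channel weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(self.pool(x))              # excitation: channel descriptors -> weights
        return x * w                           # recalibration: down-weight redundant channels

# Usage on a dummy feature map (batch, channels, height, width):
feats = torch.randn(1, 64, 128, 128)
out = ChannelAttention(64)(feats)
print(out.shape)                               # torch.Size([1, 64, 128, 128])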

Details

Language :
English
ISSN :
1868-8071
Volume :
15
Issue :
8
Database :
Complementary Index
Journal :
International Journal of Machine Learning & Cybernetics
Publication Type :
Academic Journal
Accession number :
178276506
Full Text :
https://doi.org/10.1007/s13042-024-02099-2