
WaveletFormerNet: A Transformer-based wavelet network for real-world non-homogeneous and dense fog removal.

Authors :
Zhang, Shengli
Tao, Zhiyong
Lin, Sen
Source :
Image & Vision Computing. Jun 2024, Vol. 146.
Publication Year :
2024

Abstract

Although deep convolutional neural networks have achieved remarkable success in removing synthetic fog, it is essential that they can also process images captured under complex real-world foggy conditions, such as dense or non-homogeneous fog. However, the haze distribution in the real world is complex, and downsampling can cause color distortion or loss of detail in the output as the resolution of the feature map or image decreases. Moreover, over-stacking convolutional blocks increases model complexity. In addition to the challenge of obtaining sufficient training data, deep learning techniques for foggy image processing are prone to overfitting, which limits the generalization ability of the model and hinders its practical application in real-world scenarios. Considering these issues, this paper proposes a Transformer-based wavelet network (WaveletFormerNet) for real-world foggy image recovery. We embed the discrete wavelet transform into the Vision Transformer by proposing the WaveletFormer and IWaveletFormer blocks, aiming to alleviate the texture detail loss and color distortion caused by downsampling. We introduce parallel convolution into the Transformer block, which captures multi-frequency information through a lightweight mechanism; this structure reduces computational cost and improves the effectiveness of the network. Additionally, we design a feature aggregation module (FAM) to maintain image resolution and enhance the feature extraction capacity of the model, further contributing to its performance on real-world foggy image recovery tasks. Extensive experiments on real-world fog datasets demonstrate that WaveletFormerNet outperforms state-of-the-art methods in both quantitative and qualitative evaluations while keeping model complexity low. Satisfactory results on real-world dust removal and application tests further showcase the generalization ability of WaveletFormerNet in computer vision-related applications, confirming the effectiveness and robustness of the proposed approach. Our code is available at https://github.com/shengli666666/WaveletFormerNet.

• Combining the Vision Transformer and the wavelet transform alleviates image detail loss.
• Parallel convolution captures frequency information through a lightweight mechanism.
• The feature aggregation module enhances the feature extraction capability of the model.
• Experimental results demonstrate WaveletFormerNet's superior dehazing performance.

[ABSTRACT FROM AUTHOR]
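The abstract describes embedding a discrete wavelet transform into a Vision Transformer block and pairing it with a parallel convolution branch. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' released implementation (see the GitHub repository above for the actual code): it downsamples features with a Haar DWT instead of strided convolution, applies self-attention to the low-frequency sub-band, and fuses the high-frequency sub-bands through a parallel convolution. All module names and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a "WaveletFormer-style" block (assumed design, not the paper's code):
# Haar DWT replaces strided downsampling, self-attention runs on the low-frequency band,
# and a parallel convolution processes the high-frequency bands.
import torch
import torch.nn as nn


def haar_dwt(x):
    """2-D Haar-style DWT: split a (B, C, H, W) map into four half-resolution sub-bands."""
    a, b = x[:, :, 0::2, :], x[:, :, 1::2, :]            # even / odd rows
    out = []
    for y in ((a + b) / 2, (a - b) / 2):                 # row low-pass, row high-pass
        c, d = y[:, :, :, 0::2], y[:, :, :, 1::2]        # even / odd columns
        out += [(c + d) / 2, (c - d) / 2]                # column low-pass, column high-pass
    return out                                           # [LL, LH, HL, HH]


class WaveletFormerBlock(nn.Module):
    """Hypothetical block: attention on the LL band plus a parallel conv on high-freq bands."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.high_conv = nn.Conv2d(3 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        ll, lh, hl, hh = haar_dwt(x)
        b, c, h, w = ll.shape
        tokens = self.norm(ll.flatten(2).transpose(1, 2))        # (B, H*W, C) tokens from LL
        attn_out, _ = self.attn(tokens, tokens, tokens)
        ll = ll + attn_out.transpose(1, 2).reshape(b, c, h, w)   # attention on low-freq band
        high = self.high_conv(torch.cat([lh, hl, hh], dim=1))    # parallel conv on high-freq bands
        return ll + high                                         # fused half-resolution features


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)
    print(WaveletFormerBlock(32)(feats).shape)                   # torch.Size([1, 32, 32, 32])
```

Running the example prints a half-resolution feature map, mirroring the resolution-halving role that downsampling plays in the described architecture while keeping both frequency bands in play.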

Details

Language :
English
ISSN :
0262-8856
Volume :
146
Database :
Academic Search Index
Journal :
Image & Vision Computing
Publication Type :
Academic Journal
Accession number :
177372766
Full Text :
https://doi.org/10.1016/j.imavis.2024.105014