
A sparse lightweight attention network for image super-resolution.

Authors:
Zhang, Hongao
Fang, Jinsheng
Hu, Siyu
Zeng, Kun
Source:
Visual Computer. Feb 2024, Vol. 40, Issue 2, p1261-1272. 12p.
Publication Year:
2024

Abstract

Recently, deep learning methods have been widely applied to single image super-resolution (SISR) reconstruction tasks and have achieved great improvements in both quantitative and qualitative evaluations. However, most existing convolutional neural network-based methods reduce the number of layers or channels to obtain a lightweight model. These strategies may weaken the representation of informative features and degrade network performance. To address this issue, we propose a sparse lightweight attention network (SLAN), a novel SISR algorithm that preserves informative features across layers. Specifically, a sparse attention feature fusion module, built from lightweight attention and sparse extracting modules, is developed to expand the receptive field of feature extraction and enhance the ability to extract informative features. To exploit multi-level features while keeping the number of multi-adds low, cross fusion is employed and shown to be effective. Extensive experimental results on public datasets demonstrate the superior performance of our proposed SLAN. The average PSNR(dB)/SSIM values of SLAN are about 0.04/0.0004, 0.06/0.0009, and 0.09/0.0018 higher than those of competing methods under scaling factors of ×2, ×3, and ×4, respectively. SLAN benefits from its small model size and low computation cost and may be deployed on mobile platforms. [ABSTRACT FROM AUTHOR]
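The abstract describes attention-based feature reweighting as the core building block of SLAN, but the record does not reproduce the authors' architecture. As a rough illustration of the kind of lightweight channel-attention mechanism such networks build on, here is a minimal NumPy sketch of squeeze-and-excitation-style attention; all function names, shapes, and the reduction ratio below are illustrative assumptions, not the paper's actual SLAN module:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Illustrative squeeze-and-excitation-style channel attention
    (an assumption for demonstration, not the authors' SLAN code):
    global-average-pool each channel, pass the pooled vector through
    a small two-layer bottleneck, and rescale the channels."""
    # features has shape (C, H, W)
    pooled = features.mean(axis=(1, 2))             # squeeze: (C,)
    hidden = np.maximum(w1 @ pooled, 0.0)           # reduce + ReLU: (C // r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # expand + sigmoid: (C,)
    return features * weights[:, None, None]        # reweight each channel map

# Tiny usage example with random weights and reduction ratio r = 2.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))   # bottleneck "reduce" weights
w2 = rng.standard_normal((C, C // r))   # bottleneck "expand" weights
y = channel_attention(x, w1, w2)
```

The bottleneck keeps the attention branch cheap: it adds only 2·C²/r parameters per block, which is the kind of parameter economy a lightweight SR network aims for.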

Details

Language:
English
ISSN:
0178-2789
Volume:
40
Issue:
2
Database:
Academic Search Index
Journal:
Visual Computer
Publication Type:
Academic Journal
Accession Number:
174971143
Full Text:
https://doi.org/10.1007/s00371-023-02845-7