Single‐image super‐resolution using lightweight transformer‐convolutional neural network hybrid model.
- Source :
- IET Image Processing (Wiley-Blackwell); 8/21/2023, Vol. 17 Issue 10, p2881-2893, 13p
- Publication Year :
- 2023
Abstract
- With constant advances in deep learning methods applied to image processing, deep convolutional neural networks (CNNs) have been widely explored for single‐image super‐resolution (SISR) and have attained significant success. However, these CNN‐based methods cannot fully exploit the internal and external information of the image. The authors add a lightweight Transformer structure to capture this information. Specifically, they apply a dense block structure and residual connections to build a residual dense convolution block (RDCB) that reduces the parameter count and extracts shallow features. The lightweight transformer block (LTB) further extracts features and learns the texture details between patches through the self‐attention mechanism. The LTB comprises an efficient multi‐head transformer (EMT) with a small graphics processing unit (GPU) memory footprint, which benefits from feature preprocessing by multi‐head attention (MA), reduction, and expansion; the EMT significantly reduces the use of GPU resources. In addition, a detail‐purifying attention block (DAB) is proposed to exploit context information in the high‐resolution (HR) space and recover more details. Extensive evaluations on four benchmark datasets demonstrate the effectiveness of the proposed model in terms of quantitative metrics and visual quality. The proposed EMT uses only about 40% as much GPU memory as comparable methods, while achieving better performance. [ABSTRACT FROM AUTHOR]
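- The reduce-then-expand idea behind the EMT can be illustrated with a minimal sketch: project patch features to a smaller channel dimension, run multi-head self-attention there, then expand back with a residual connection. This is only an assumption-based NumPy illustration of the general technique; the paper's exact EMT layers, projections, and normalizations are not specified in this record, and the function and weight names below are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reduced_self_attention(x, w_red, w_exp, num_heads=4):
    """Sketch of reduce -> multi-head self-attend -> expand (hypothetical).

    x:     (n_patches, c) input patch features
    w_red: (c, c_red) channel-reduction projection
    w_exp: (c_red, c) channel-expansion projection
    Attention is computed on the reduced c_red channels, so the
    intermediate tensors are smaller than attending at full width.
    """
    n, c = x.shape
    z = x @ w_red                                        # (n, c_red)
    c_red = z.shape[1]
    d = c_red // num_heads
    zh = z.reshape(n, num_heads, d).transpose(1, 0, 2)   # (heads, n, d)
    scores = zh @ zh.transpose(0, 2, 1) / np.sqrt(d)     # (heads, n, n)
    attn = softmax(scores, axis=-1)
    out = (attn @ zh).transpose(1, 0, 2).reshape(n, c_red)
    return x + out @ w_exp                               # residual add
```

The memory saving claimed for the EMT would come from holding attention intermediates at the reduced width rather than the full channel width; the ~40% figure in the abstract refers to the authors' full design, not this toy sketch.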
Details
- Language :
- English
- ISSN :
- 17519659
- Volume :
- 17
- Issue :
- 10
- Database :
- Complementary Index
- Journal :
- IET Image Processing (Wiley-Blackwell)
- Publication Type :
- Academic Journal
- Accession number :
- 169771622
- Full Text :
- https://doi.org/10.1049/ipr2.12833