Lightweight super-resolution via multi-group window self-attention and residual blueprint separable convolution.
- Author
- Liang, Chen, Liang, Hu, Liu, Yuchen, and Zhao, Shengrong
- Abstract
Benefiting from the self-attention mechanism and convolutional operations, numerous lightweight Transformer-based single image super-resolution (SISR) approaches have achieved considerable breakthroughs in performance. Nevertheless, the high computational cost of self-attention and the parametric overhead of convolution still limit the deployment of these methods on low-budget devices. To tackle these issues, we propose an attention and convolution-cooperative network (ACCNet) for lightweight image super-resolution (SR), built from sequentially cascaded attention and convolution-cooperative blocks (ACCBs). Specifically, in each ACCB, to alleviate the computational burden of the self-attention mechanism, we propose a multi-group window self-attention (MGWSA), which computes self-attention on different groups of features with different window sizes. To implement a lightweight convolutional operation that assists self-attention in local feature extraction, we propose a residual blueprint separable convolution (RBSC), which combines the advantages of efficient convolution and residual learning. Additionally, an enhanced multi-layer perceptron (EMLP) is designed to strengthen the spatial representation of features, and contrast-aware channel attention (CCA) is introduced to exploit the beneficial interdependencies among channels. Extensive experiments demonstrate that ACCNet achieves a better trade-off between model complexity and performance than other state-of-the-art lightweight SR methods.
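The core idea of MGWSA described above, splitting the channel dimension into groups and running windowed self-attention with a different window size in each group, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the learned Q/K/V and output projections, multi-head structure, and the exact group/window configuration are omitted or assumed (identity projections, two groups with window sizes 4 and 8).

```python
import numpy as np

def window_attention(x, win):
    """Self-attention within non-overlapping win x win windows.
    Identity Q/K/V projections for brevity (the paper uses learned ones)."""
    H, W, C = x.shape
    assert H % win == 0 and W % win == 0
    # Partition the feature map into (num_windows, win*win, C) token sequences.
    xw = x.reshape(H // win, win, W // win, win, C)
    xw = xw.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, C)
    # Scaled dot-product attention among the tokens of each window.
    scores = xw @ xw.transpose(0, 2, 1) / np.sqrt(C)
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    out = attn @ xw
    # Reverse the window partition back to (H, W, C).
    out = out.reshape(H // win, W // win, win, win, C)
    return out.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

def mgwsa(x, window_sizes=(4, 8)):
    """Multi-group window self-attention (sketch): split channels into
    equal groups, attend each group at its own window size, concatenate."""
    groups = np.split(x, len(window_sizes), axis=-1)
    return np.concatenate(
        [window_attention(g, w) for g, w in zip(groups, window_sizes)],
        axis=-1)

x = np.random.rand(16, 16, 8)
y = mgwsa(x)
print(y.shape)  # (16, 16, 8) -- shape is preserved
```

Because each group only attends within its own windows, the attention cost scales with the window area rather than the full spatial resolution, which is the source of the computational saving the abstract refers to.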
- Published
- 2024