
MICU: Image super-resolution via multi-level information compensation and U-net.

Authors :
Chen, Yuantao
Xia, Runlong
Yang, Kai
Zou, Ke
Source :
Expert Systems with Applications. Jul 2024, Vol. 245.
Publication Year :
2024

Abstract

• The proposed method fuses multi-level information compensation with a U-Net-like network.
• A module is designed for multi-level information compensation and channel compression.
• The method fuses the compressed features with cross-channel correlation features.

Recently, deep convolutional neural networks have demonstrated high-quality reconstruction in image super-resolution. In this paper, we propose an improved image super-resolution reconstruction method based on multi-level information compensation and a U-Net-like network, to address the problem that deep-network-based super-resolution algorithms tend to lose feature information during feature extraction, leaving the reconstructed image short of texture and edge detail. First, we design a U-Net-like network for image super-resolution reconstruction: its down-channel branch performs multi-level feature extraction and channel compression on the input features; its bottom module fuses the compressed features and extracts the correlation features across channels; and its up-channel branch performs multi-level feature extraction and channel recovery on the compressed correlation features. A multi-level information compensation model is then designed to compensate for the information lost during channel compression and the information that is difficult to restore during channel recovery. The experimental results show that the proposed algorithm achieves a significant improvement in Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and visual quality compared with state-of-the-art algorithms. The average PSNR of the proposed method improved by 1.63 dB, 1.53 dB, 0.97 dB, and 0.94 dB over SRCNN, HAT, DAT, and CARN, respectively. [ABSTRACT FROM AUTHOR]
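The abstract's down-channel/bottom/up-channel structure with skip compensation can be illustrated schematically. The following is a minimal NumPy sketch, not the authors' implementation: the channel widths, random 1x1-convolution weights, and additive skip compensation are all illustrative assumptions standing in for the paper's learned modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # A 1x1 convolution is a matrix product over the channel axis:
    # x is (C_in, H, W), w is (C_out, C_in) -> result is (C_out, H, W).
    return np.tensordot(w, x, axes=([1], [0]))

# Hypothetical channel widths for a 3-level encoder/decoder (illustrative only).
C = [64, 32, 16]
x = rng.standard_normal((C[0], 8, 8))

# Down-channel branch: compress channels level by level, saving each
# pre-compression feature map for later information compensation.
skips, feat = [], x
for c_in, c_out in zip(C[:-1], C[1:]):
    skips.append(feat)  # saved for the compensation path
    feat = conv1x1(feat, rng.standard_normal((c_out, c_in)) / c_in)

# Bottom module: a stand-in channel-mixing step on the compressed features.
feat = conv1x1(feat, rng.standard_normal((C[-1], C[-1])) / C[-1])

# Up-channel branch: recover channels level by level and add back the
# matching encoder feature ("multi-level information compensation").
for c_in, c_out in zip(C[::-1][:-1], C[::-1][1:]):
    feat = conv1x1(feat, rng.standard_normal((c_out, c_in)) / c_in)
    feat = feat + skips.pop()  # compensate information lost in compression

print(feat.shape)  # (64, 8, 8): original channel count is restored
```

The sketch shows only the data flow the abstract describes: compression discards channel information on the way down, and additive skips from the matching levels supply the compensation signal on the way up.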

Details

Language :
English
ISSN :
09574174
Volume :
245
Database :
Academic Search Index
Journal :
Expert Systems with Applications
Publication Type :
Academic Journal
Accession number :
176152007
Full Text :
https://doi.org/10.1016/j.eswa.2023.123111