Semantic-Aware Guided Low-Light Image Super-Resolution
- Source :
- IEEE Access, Vol. 12, pp. 72408-72419 (2024)
- Publication Year :
- 2024
- Publisher :
- IEEE, 2024.
Abstract
- Single-image super-resolution based on deep learning has achieved extraordinary performance. However, due to unavoidable environmental or technological limitations, some images suffer not only from low resolution but also from low brightness. Existing super-resolution methods applied to low-light inputs may produce results with low brightness and many missing details. In this paper, we propose a semantic-aware guided low-light image super-resolution method. First, we present a semantic-perception-guided super-resolution framework that exploits the rich semantic prior knowledge of a semantic network module. Through the semantic-aware guidance module, reference semantic features and target image features are fused in a quantitative attention manner, guiding low-light image features to maintain semantic consistency during reconstruction. Second, we design a self-calibrated light adjustment module that constrains the convergence consistency of each illumination estimation block through a self-calibrated block, improving the stability and robustness of the brightness-enhanced output features. Third, we design a lightweight super-resolution module based on spatial and channel reconstruction convolution, which uses an attention module to further enhance the super-resolution reconstruction capability. Our proposed model surpasses methods such as RDN, RCAN, and NLSN in both qualitative and quantitative analysis of low-light image super-resolution reconstruction. The experiments demonstrate the efficiency and effectiveness of our method.
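- To illustrate the semantic-aware guidance idea described above, the following is a minimal PyTorch-style sketch (not the authors' code) of fusing reference semantic features with low-light image features through attention. The module name `SemanticGuidanceFusion`, the channel size, and the use of multi-head attention with a residual connection are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: attention-based fusion of semantic reference features
# with target image features, as described in the abstract.
import torch
import torch.nn as nn

class SemanticGuidanceFusion(nn.Module):
    """Fuses semantic prior features with low-light image features via attention.

    Illustrative module; layer choices and dimensions are assumptions.
    """
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, img_feat: torch.Tensor, sem_feat: torch.Tensor) -> torch.Tensor:
        # img_feat, sem_feat: (B, C, H, W) feature maps of the same spatial size
        b, c, h, w = img_feat.shape
        q = img_feat.flatten(2).transpose(1, 2)   # (B, H*W, C) queries from image features
        kv = sem_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from semantic prior
        fused, _ = self.attn(q, kv, kv)           # semantic features re-weight image features
        out = self.norm(q + fused)                # residual keeps the original image content
        return out.transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    fusion = SemanticGuidanceFusion(channels=64)
    img = torch.randn(1, 64, 32, 32)
    sem = torch.randn(1, 64, 32, 32)
    print(fusion(img, sem).shape)  # torch.Size([1, 64, 32, 32])
```

- The residual connection reflects the abstract's goal of guiding, rather than replacing, the low-light image features so that semantic consistency is maintained during reconstruction.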
Details
- Language :
- English
- ISSN :
- 2169-3536
- Volume :
- 12
- Database :
- Directory of Open Access Journals
- Journal :
- IEEE Access
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.0031a8d49ee4429fa193ebacd6016625
- Document Type :
- article
- Full Text :
- https://doi.org/10.1109/ACCESS.2024.3403096