
Semantic attention guided low-light image enhancement with multi-scale perception.

Authors :
Hou, Yongqi
Yang, Bo
Source :
Journal of Visual Communication & Image Representation. Aug 2024, Vol. 103.
Publication Year :
2024

Abstract

Low-light environments often cause complex degradation of captured images. However, most deep learning-based low-light image enhancement methods learn only a single mapping between the low-light input image and the desired normal-light image, without considering semantic priors. This may cause the network to deviate from the original color of a region. In addition, deep network architectures are poorly suited to low-light image recovery because of the low pixel values involved. To address these issues, we propose a novel network called SAGNet. It consists of two branches: the main branch extracts global enhancement features at the level of the original image, while the other branch introduces semantic information through region-based feature learning and, with multi-level perception, learns local enhancement features for semantic regions to maintain color consistency. The extracted local features are merged with the global enhancement features for semantic consistency and visualization. We also propose an unsupervised loss function that improves the network's adaptability to general scenes and reduces the effect of sparse datasets. Extensive experiments and ablation studies show that SAGNet better maintains color accuracy in all cases and keeps natural luminance consistency across semantic regions. [ABSTRACT FROM AUTHOR]
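The two-branch idea in the abstract (a global enhancement pass fused with region-wise, semantically guided adjustments) can be illustrated with a toy sketch. This is not the SAGNet architecture or the authors' code; the function, its parameters, and the simple gain-based "enhancement" are all hypothetical stand-ins for the learned branches.

```python
import numpy as np

def enhance_two_branch(image, semantic_mask, global_gain=2.0, region_gains=None):
    """Toy two-branch fusion (hypothetical, not SAGNet):
    a global enhancement of the whole image is averaged with
    per-region adjustments driven by a semantic label mask."""
    if region_gains is None:
        region_gains = {}
    # Global branch: one enhancement applied to the entire image.
    global_out = np.clip(image * global_gain, 0.0, 1.0)
    # Semantic branch: a separate adjustment per semantic region label,
    # standing in for region-based local feature learning.
    local_out = image.copy()
    for label, gain in region_gains.items():
        region = semantic_mask == label
        local_out[region] = np.clip(image[region] * gain, 0.0, 1.0)
    # Fusion: merge local (region-aware) and global enhancement results.
    return 0.5 * global_out + 0.5 * local_out

# Usage: a 4x4 grayscale "low-light" image split into two semantic regions.
img = np.full((4, 4), 0.2)
mask = np.zeros((4, 4), dtype=int)
mask[:, 2:] = 1  # right half belongs to semantic region 1
out = enhance_two_branch(img, mask, global_gain=2.0,
                         region_gains={0: 1.5, 1: 3.0})
```

In the real model both branches are learned feature extractors and the fusion is part of the network, but the sketch shows why region labels matter: each semantic region can receive its own enhancement rather than one global mapping.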

Details

Language :
English
ISSN :
10473203
Volume :
103
Database :
Academic Search Index
Journal :
Journal of Visual Communication & Image Representation
Publication Type :
Academic Journal
Accession number :
179420889
Full Text :
https://doi.org/10.1016/j.jvcir.2024.104242