
Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement.

Authors :
Ma, Long
Liu, Risheng
Zhang, Jiaao
Fan, Xin
Luo, Zhongxuan
Source :
IEEE Transactions on Neural Networks & Learning Systems. Oct 2022, Vol. 33, Issue 10, p5666-5680. 15p.
Publication Year :
2022

Abstract

Enhancing the quality of low-light (LOL) images plays an important role in many image processing and multimedia applications. In recent years, a variety of deep learning techniques have been developed to address this challenging task. A typical framework estimates the illumination and reflectance simultaneously, but it disregards the scene-level contextual information encapsulated in feature spaces, causing many unfavorable outcomes, e.g., loss of detail, color desaturation, and artifacts. To address these issues, we develop a new context-sensitive decomposition network (CSDNet) architecture to exploit scene-level contextual dependencies across spatial scales. More concretely, we build a two-stream estimation mechanism comprising a reflectance estimation network and an illumination estimation network. We design a novel context-sensitive decomposition connection that bridges the two streams by incorporating the physical principle. Spatially varying illumination guidance is further constructed to achieve the edge-aware smoothness property of the illumination component. According to different training patterns, we construct CSDNet (paired supervision) and a context-sensitive decomposition generative adversarial network (CSDGAN, unpaired supervision) to fully evaluate our designed architecture. We test our method on seven testing benchmarks [including MIT-Adobe FiveK, LOL, ExDark, and naturalness preserved enhancement (NPE)] and conduct extensive analytical and evaluation experiments. Thanks to the designed context-sensitive decomposition connection, we achieve excellent enhancement results (with sufficient detail, vivid colors, and little noise), which demonstrates our superiority over existing state-of-the-art approaches. Finally, considering the practical need for high efficiency, we develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels. Furthermore, by sharing an encoder between the two components, we obtain an even more lightweight version (SLiteCSDNet for short). SLiteCSDNet contains only 0.0301M parameters yet achieves almost the same performance as CSDNet. Code is available at https://github.com/KarelZhang/CSDNet-CSDGAN. [ABSTRACT FROM AUTHOR]
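
The abstract's two-stream decomposition and edge-aware illumination smoothness follow the Retinex physical principle (an image I is the element-wise product of reflectance R and illumination L). The sketch below is a minimal, hypothetical PyTorch illustration of that general idea, not the authors' released code: the class and function names (TwoStreamDecomposer, edge_aware_smoothness), layer counts, and loss weights are all assumptions for illustration; the actual CSDNet architecture and its context-sensitive connection are detailed in the paper and repository.

```python
# Hypothetical sketch of Retinex-style two-stream decomposition.
# Names and hyperparameters are illustrative, not from the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamDecomposer(nn.Module):
    """Toy two-stream estimator: one branch predicts reflectance R,
    the other predicts illumination L, tied by the Retinex principle
    I = R * L (element-wise product)."""
    def __init__(self, ch=16):
        super().__init__()
        def branch(out_ch):
            return nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, out_ch, 3, padding=1), nn.Sigmoid())
        self.reflectance_net = branch(3)   # full-color reflectance map
        self.illumination_net = branch(1)  # single-channel illumination map

    def forward(self, low_light):
        refl = self.reflectance_net(low_light)
        illum = self.illumination_net(low_light)
        recon = refl * illum               # Retinex recomposition I = R * L
        return refl, illum, recon

def edge_aware_smoothness(illum, guide, alpha=10.0):
    """Penalize illumination gradients except where the guide image has
    strong edges: a common way to obtain the spatially varying,
    edge-aware smoothness the abstract describes."""
    di_x = torch.abs(illum[..., :, 1:] - illum[..., :, :-1])
    di_y = torch.abs(illum[..., 1:, :] - illum[..., :-1, :])
    gray = guide.mean(dim=1, keepdim=True)
    gx = torch.abs(gray[..., :, 1:] - gray[..., :, :-1])
    gy = torch.abs(gray[..., 1:, :] - gray[..., :-1, :])
    return (di_x * torch.exp(-alpha * gx)).mean() + \
           (di_y * torch.exp(-alpha * gy)).mean()

# One paired-supervision training step: reconstruct the input and keep
# the illumination component edge-aware smooth (0.1 weight is arbitrary).
model = TwoStreamDecomposer()
low = torch.rand(1, 3, 64, 64)             # stand-in low-light input
refl, illum, recon = model(low)
loss = F.l1_loss(recon, low) + 0.1 * edge_aware_smoothness(illum, low)
loss.backward()
```

Under this framing, the SLiteCSDNet variant mentioned in the abstract would correspond to the two branches sharing a single encoder and channel widths being reduced, which is how the parameter count drops to 0.0301M while preserving the decomposition structure.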

Details

Language :
English
ISSN :
2162-237X
Volume :
33
Issue :
10
Database :
Academic Search Index
Journal :
IEEE Transactions on Neural Networks & Learning Systems
Publication Type :
Periodical
Accession number :
160690126
Full Text :
https://doi.org/10.1109/TNNLS.2021.3071245