Bilateral fusion low‐light image enhancement with implicit information constraints
- Author
- Jiahui Zhu, Shengbo Sang, Aoqun Jian, Le Yang, Luxiao Sang, Yang Ge, Rihui Kang, LiuWei Yang, Lei Tao, and RunFang Hao
- Subjects
- gradient methods, image enhancement, image fusion, neural net architecture, Photography, TR1-1050, Computer software, QA76.75-76.765
- Abstract
Research on low‐light image enhancement focuses on improving the quality of images captured in dim conditions. Recently, deep learning has driven significant advances, with many studies using neural networks to enhance low‐light images. However, most focus on complex network designs to increase nonlinearity, often neglecting the implicit information carried by local image transformations. This paper introduces an improved U‐net‐based method for low‐light enhancement that retains the original encoding network and adds branch links in the decoding network. The method uses attention feature fusion to handle image noise and gradients separately, adjusting brightness through a gradient‐adaptive transform, and optimizes performance with loss functions based on peak signal‐to‐noise ratio and colour consistency. Unlike previous methods, this approach emphasizes extracting implicit information from image gradients, achieving enhancement that aligns with the original brightness distribution. Experiments show that the method produces enhanced images with high detail similarity to the original through end‐to‐end inference.
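To make the peak signal‐to‐noise ratio objective mentioned in the abstract concrete, the sketch below computes PSNR between a reference and an estimate, and applies a simple gamma curve as a stand‐in for brightening a low‐light image. This is an illustrative example only: the synthetic images and the gamma transform are assumptions, not the paper's actual gradient‐adaptive transform or network.

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy data: a synthetic well-lit image and a simulated low-light version.
rng = np.random.default_rng(0)
bright = rng.uniform(0.4, 0.9, size=(32, 32))    # hypothetical ground truth
dark = bright * 0.2                              # simulated low-light input

# Gamma-style brightening (a crude stand-in for the paper's
# gradient-adaptive transform, chosen only for illustration).
enhanced = np.clip(dark ** 0.45 * 0.9, 0.0, 1.0)

print(psnr(bright, dark))      # PSNR of the unenhanced low-light input
print(psnr(bright, enhanced))  # PSNR after brightening (should be higher)
```

A PSNR‐based loss in training would simply maximize this quantity (or minimize the underlying mean squared error) between the network output and the reference image.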
- Published
- 2024