99 results for "Low-light image"
Search Results
2. Low-light image enhancement algorithm via adaptive fusion of cross-level features (跨级特征自适应融合的暗光图像增强算法).
- Author
-
梁礼明, 朱晨锟, 阳 渊, and 李仁杰
- Abstract
Abstract not included in this record; only the standard copyright notice of the Chinese Journal of Liquid Crystal & Displays is provided. Users should refer to the original published version of the material for the full abstract.
- Published
- 2024
- Full Text
- View/download PDF
3. EDAFormer: Enhancing Low-Light Images with a Dual-Attention Transformer
- Author
-
Zhang, Jin, Jin, Haiyan, Su, Haonan, Zhang, Yuanlin, Xiao, Zhaolin, and Wang, Bin (volume edited by Wand, Michael, Malinovská, Kristína, Schmidhuber, Jürgen, and Tetko, Igor V.)
- Published
- 2024
- Full Text
- View/download PDF
4. Efficient and natural image fusion method for low-light images based on active contour model and adaptive gamma correction.
- Author
-
Ozturk, Nurullah and Ozturk, Serkan
- Subjects
IMAGE fusion, COLOR space, IMAGE processing, GAUSSIAN distribution, IMAGE intensifiers
- Abstract
Image fusion-based methods have received much attention in image processing applications in recent years. In this paper, an efficient and natural image fusion method based on the active contour model (ACM) and adaptive gamma correction (AGC) is proposed for low-light images. The image is quickly segmented in detail into object and background regions using a hybrid ACM based on Chan-Vese and Local Gaussian Distribution Fitting (CV-LGDF), and a fusion mask is obtained. Then, an effective gamma correction parameter is calculated independently for each region using an exposure threshold. The dynamic pixel range of each region is spread using histogram stretching. Each region is converted to the HSI color space, and the intensity component of each region is enhanced independently with the AGC method. The enhanced regions are merged using the fusion mask, and the enhanced image is transformed back into RGB color space. Finally, histogram equalization is performed on the input image using the histogram map of the fusion image. The performance of the proposed method is compared with that of other state-of-the-art low-light methods. The experiments illustrate that our method provides effective and natural enhancement of contrast and brightness in the image.
- Published
- 2024
- Full Text
- View/download PDF
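The adaptive gamma correction step in entry 4 can be sketched as follows. This is a minimal illustration only: the mapping from mean brightness to gamma and the `target` parameter are common heuristics assumed here, not the paper's per-region exposure-threshold formulation.

```python
import numpy as np

def adaptive_gamma(intensity, target=0.5):
    """Gamma-correct a normalized intensity channel so that its mean
    brightness is pushed toward `target`. Since mean**gamma == target
    when gamma = log(target)/log(mean), darker inputs get a smaller
    gamma and hence a stronger boost."""
    mean = float(intensity.mean())
    gamma = np.log(target) / np.log(mean + 1e-6)
    return np.clip(intensity, 0.0, 1.0) ** gamma

# a uniformly underexposed region is brightened toward mid-grey
dark_region = np.full((4, 4), 0.2)
enhanced = adaptive_gamma(dark_region)
```

Applying this per segmented region, as the paper does with its fusion mask, lets bright and dark regions receive different gamma values instead of one global curve.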
5. KinD-LCE: curve estimation and Retinex Fusion on low-light image.
- Author
-
Lei, Xiaochun, Mai, Weiliang, Xie, Junlin, Liu, He, Jiang, Zetao, Gong, Zhaoting, Lu, Chang, and Lu, Linjun
- Abstract
Low-light images often suffer from noise and color distortion. Object detection, semantic segmentation, instance segmentation, and other tasks are challenging on low-light images because of image noise and chromatic aberration. We also found that conventional Retinex theory loses information when adjusting images for low-light tasks. In response, this paper proposes an algorithm for low-illumination enhancement. The proposed method, KinD-LCE, uses a light curve estimation module to enhance the illumination map in the Retinex-decomposed image, improving overall image brightness. An illumination map and reflection map fusion module is also proposed to restore image details and reduce detail loss. Additionally, a TV (total variation) loss function is applied to suppress noise. Our method was trained on the GladNet dataset, known for its diverse collection of low-light images, tested on the LOL (Low-Light) dataset, and evaluated on the ExDark dataset for downstream tasks, demonstrating competitive performance with a PSNR of 19.7216 and an SSIM of 0.8213.
- Published
- 2024
- Full Text
- View/download PDF
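The TV (total variation) loss mentioned in entry 5 penalizes differences between neighbouring pixels, which suppresses isolated noise while leaving smooth regions untouched. A minimal anisotropic version (a standard formulation, not necessarily KinD-LCE's exact loss term) looks like:

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total variation: the sum of absolute differences
    between each pixel and its right/bottom neighbours."""
    dh = np.abs(np.diff(img, axis=0)).sum()  # vertical differences
    dw = np.abs(np.diff(img, axis=1)).sum()  # horizontal differences
    return float(dh + dw)

smooth = np.ones((8, 8))
noisy = smooth.copy()
noisy[4, 4] = 2.0   # a single noise spike raises the loss
```

Minimizing this term during training therefore pushes the network toward outputs without speckle noise.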
6. Semantic-Aware Guided Low-Light Image Super-Resolution
- Author
-
Sheng Ren, Rui Cao, Wenxue Tan, and Yayuan Tang
- Subjects
Low-light image, semantic-aware, super-resolution, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Single-image super-resolution based on deep learning has achieved extraordinary performance. However, due to unavoidable environmental or technological limitations, some images have not only low resolution but also low brightness. Existing super-resolution methods applied to low-light input may produce results with low brightness and many missing details. In this paper, we propose a semantic-aware guided low-light image super-resolution method. First, we present a semantic-perception-guided super-resolution framework that exploits the rich semantic prior knowledge of a semantic network module. Through the semantic-aware guidance module, reference semantic features and target image features are fused by quantitative attention, guiding low-light image features to maintain semantic consistency during reconstruction. Second, we design a self-calibrated light adjustment module that constrains the convergence consistency of each illumination estimation block via a self-calibrated block, improving the stability and robustness of the brightness-enhancement features. Third, we design a lightweight super-resolution module based on spatial and channel reconstruction convolution, which uses an attention module to further enhance super-resolution reconstruction capability. Our proposed model surpasses methods such as RDN, RCAN, and NLSN in both qualitative and quantitative analyses of low-light image super-resolution reconstruction. The experiments prove the efficiency and effectiveness of our method.
- Published
- 2024
- Full Text
- View/download PDF
7. Low light image enhancement using reflection model and wavelet fusion
- Author
-
Singh, Pallavi, Bhandari, Ashish Kumar, and Kumar, Reman
- Published
- 2024
- Full Text
- View/download PDF
8. Laplacian and gaussian pyramid based multiscale fusion for nighttime image enhancement
- Author
-
Singh, Pallavi and Bhandari, Ashish Kumar
- Published
- 2024
- Full Text
- View/download PDF
9. Illumination estimation for nature preserving low-light image enhancement.
- Author
-
Singh, Kavinder and Parihar, Anil Singh
- Subjects
IMAGE intensifiers, NATURE reserves, LIGHTING, IMAGE analysis, FILTERS & filtration
- Abstract
In the Retinex model, an image is considered a combination of two components: illumination and reflectance. However, decomposing an image into illumination and reflectance is an ill-posed problem. This paper presents a new approach to estimating the illumination for low-light image enhancement. The work comprises three major tasks: estimation of a structure-aware initial illumination, refinement of the estimated illumination, and a final correction of lightness in the refined illumination. We propose a novel structure-aware initial illumination estimate that leverages a new multi-scale guided filtering approach. The algorithm refines the initial estimate by formulating a new multi-objective optimization function. Further, we propose a new adaptive illumination adjustment that corrects lightness using the estimated illumination. Qualitative and quantitative analysis on low-light images with varying illumination shows that the proposed algorithm enhances images with color constancy and preserves natural details. Performance comparison with state-of-the-art algorithms shows the superiority of the proposed algorithm.
- Published
- 2024
- Full Text
- View/download PDF
10. FOLD: Low-Level Image Enhancement for Low-Light Object Detection Based on FPGA MPSoC.
- Author
-
Li, Xiang, Li, Zeyu, Zhou, Lirong, and Huang, Zhao
- Subjects
OBJECT recognition (Computer vision), IMAGE enhancement (Imaging systems), IMAGE intensifiers, COMPUTER vision
- Abstract
Object detection, as the most fundamental and challenging task in computer vision, has a wide range of applications. However, image quality problems in low-light scenes, such as low brightness, low contrast, and high noise, cause significant degradation of object detection performance. To address this, this paper focuses on object detection algorithms in low-light scenarios, explores both low-light image enhancement and object detection, and proposes low-level image enhancement for low-light object detection based on an FPGA MPSoC. On the one hand, the low-light dataset is expanded and a YOLOv3 object detection model is trained with the low-level image enhancement technique, improving the model's detection performance in low-light scenarios; on the other hand, the model is deployed on an MPSoC board as an edge object detection system, improving detection efficiency. Finally, validation experiments on a publicly available low-light object detection dataset and the ZU3EG-AXU3EGB MPSoC board show that the proposed method effectively improves detection accuracy and efficiency.
- Published
- 2024
- Full Text
- View/download PDF
11. Enhancement of low-light images using illuminance and a fuzzy function (조도 및 퍼지함수를 사용한 저조도 영상의 개선).
- Author
-
천봉원 (Cheon, Bong-Won) and 김남호 (Kim, Nam-Ho)
- Subjects
SIGNAL processing, LUMINOUS flux
- Abstract
Images acquired in low light often have poor visibility and can considerably degrade an algorithm's performance when used in computer vision or multimedia systems. Techniques to improve low-light images have been proposed to solve these problems, but in the enhanced images, light sources can become saturated in some local areas or colors can be distorted. In this paper, a technique to improve low-light images using illuminance and a fuzzy function is proposed. In the proposed method, a fuzzy membership function is set depending on the illuminance values of a low-light image in order to divide the image into regions. The illuminance environment of each divided region is then estimated, and an image with improved illuminance is obtained. To evaluate the proposed algorithm, it was simulated alongside existing low-light image improvement methods; the results were compared using enlarged images, and quantitative evaluation was analyzed through PSNR and SSIM.
- Published
- 2023
- Full Text
- View/download PDF
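The fuzzy membership function in entry 11 assigns each pixel a degree of "darkness" based on its illuminance, so enhancement strength can vary smoothly across regions instead of switching at a hard threshold. A ramp-shaped membership can be sketched as below; the 0.3/0.6 breakpoints are illustrative assumptions, since the abstract does not give the paper's exact function.

```python
import numpy as np

def dark_membership(illuminance, low=0.3, high=0.6):
    """Degree to which a pixel belongs to the 'dark' fuzzy set:
    1 below `low`, 0 above `high`, linear in between."""
    v = np.asarray(illuminance, dtype=float)
    return np.clip((high - v) / (high - low), 0.0, 1.0)

# dark, mid, and bright pixels get memberships 1, 0.5, and 0
weights = dark_membership(np.array([0.1, 0.45, 0.9]))
```

Such weights can then blend region-wise enhancement results without visible seams at region boundaries.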
12. Very Low Illumination Image Enhancement via Lightness Mapping
- Author
-
Hashim, Ahmed Rafid, Kareem, Hana H., Daway, Hazim G., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Saraswat, Mukesh, editor, Chowdhury, Chandreyee, editor, Kumar Mandal, Chintan, editor, and Gandomi, Amir H., editor
- Published
- 2023
- Full Text
- View/download PDF
13. Enhancement of Low-Light Images Using Illumination Estimate and Local Steering Kernel.
- Author
-
Cheon, Bong-Won and Kim, Nam-Ho
- Subjects
IMAGE intensifiers, IMAGE enhancement (Imaging systems), MULTIMEDIA systems, COMPUTER vision, LIGHTING, IMAGE processing
- Abstract
Images acquired in low-light conditions often have poor visibility. These images considerably degrade the performance of algorithms used in computer vision and multimedia systems. Several methods for low-light image enhancement have been proposed to address these issues, and various techniques have been used to restore close-to-normal light conditions or improve visibility. However, enhanced images can still suffer from saturated local light sources, color distortion, and amplified noise. In this study, we propose a low-light image enhancement technique using illumination component estimation and a local steering kernel to address these problems. The proposed method estimates the illumination components of a low-light image and obtains an illumination-enhanced image based on Retinex theory. The resulting image is then color-corrected and denoised using a local steering kernel. To evaluate its performance, the proposed method is simulated on low-light images taken under various conditions, and it demonstrates visual and quantitative superiority over existing methods.
- Published
- 2023
- Full Text
- View/download PDF
14. Low-Light Image Contrast Enhancement with Adaptive Noise Attenuator for Augmented Vehicle Detection.
- Author
-
Yoon, Sungan and Cho, Jeongho
- Subjects
IMAGE intensifiers, OBJECT recognition (Computer vision), QUANTUM noise, OPTICAL reflection, NOISE, REFLECTANCE, DEEP learning
- Abstract
The rapid progress of deep learning has accelerated the use of object detection models, but most models do not operate satisfactorily in low-light environments. Many studies have therefore investigated image enhancement techniques that make objects more visible by increasing contrast, but enhancement may also harm detection because it strengthens unwanted noise arising from indirect light reflection, such as overall low brightness, streetlamps, and neon signboards. In this study, we propose a technique for improving object detection performance in low-light environments. The proposed technique inverts a low-light image so that it resembles a hazy image, and then uses a haze removal algorithm based on entropy and fidelity to increase image contrast, clarifying the boundary between object and background. In the next step, an adaptive 2D Wiener filter (A2WF) attenuates the noise inadvertently strengthened during enhancement and reinforces the object-background boundary to improve detection performance. Test results show that the proposed enhancement scheme significantly improves perceived image quality, with a perception-based image quality evaluator score 12.73% lower than existing enhancement techniques. In a comparison of vehicle detection performance, the proposed nighttime enhancement technique combined with the detection model increased average precision by up to 18.63% over existing detection methods.
- Published
- 2023
- Full Text
- View/download PDF
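The core trick in entry 14, inverting a low-light image so it resembles a hazy one and then dehazing, can be sketched with a deliberately minimal dark-channel dehazer. This is an assumption-laden toy: it uses only the per-pixel channel minimum (no spatial window, no guided-filter refinement) and omits the paper's entropy/fidelity formulation entirely.

```python
import numpy as np

def dehaze(img, omega=0.95, t_min=0.1):
    """Toy dark-channel-prior dehazing on a float RGB image in [0, 1]."""
    atmos = img.reshape(-1, 3).max(axis=0)        # crude airlight estimate
    dark = (img / atmos).min(axis=2)              # per-pixel dark channel
    t = np.clip(1.0 - omega * dark, t_min, 1.0)   # transmission map
    return (img - atmos) / t[..., None] + atmos

def enhance_low_light(img):
    # invert -> dehaze the haze-like result -> invert back
    return np.clip(1.0 - dehaze(1.0 - img), 0.0, 1.0)

low = np.full((2, 2, 3), 0.1)
low[0, 0] = 0.3          # the relatively brighter pixel gains the most
out = enhance_low_light(low)
```

The inversion works because a dark foreground over a dark background, once inverted, looks like bright content veiled by haze, which is exactly what the dark-channel model removes.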
15. RME: a low-light image enhancement model based on reflectance map enhancing.
- Author
-
Fan, Zirui, Tang, Chen, Shen, Yuxin, Xu, Min, and Lei, Zhenkun
- Abstract
Low-light images often suffer from quality degradation such as low contrast, poor visibility, and latent noise in dark regions. We propose a straightforward and efficient Retinex-based method, named RME, to enhance low-light images. Unlike existing algorithms, RME enhances images by directly improving the reflectance map. Because the reflectance map carries most of the color and texture detail, we consider that directly enhancing it yields better visual quality for the low-light image. Extensive experiments show that RME achieves better low-light enhancement results than popular methods; we demonstrate its advantages through qualitative and quantitative analysis.
- Published
- 2023
- Full Text
- View/download PDF
16. Optimization algorithm for low‐light image enhancement based on Retinex theory
- Author
-
Jie Yang, Jun Wang, LinLu Dong, ShuYuan Chen, Hao Wu, and YaWen Zhong
- Subjects
fast and robust fuzzy C‐means, guided filtering, image enhancement, low‐light image, Photography, TR1-1050, Computer software, QA76.75-76.765
- Abstract
To improve the visual quality of low-light images and reveal hidden details, an image enhancement algorithm is proposed based on a fast and robust fuzzy C-means (FRFCM) clustering algorithm combined with Retinex theory. The algorithm proceeds as follows. First, an initial illumination estimation image is constructed by max-RGB and segmented by the FRFCM algorithm. Second, the initial illumination estimate and its segmented image are linearly fused in a fixed proportion to obtain an optimized illumination estimate, which is then smoothed by guided filtering. Finally, the reflectance image is obtained via Retinex theory and the edge details are enhanced in equal ratio, yielding an enhanced image rich in texture detail. A large number of low-light image datasets are used to test the proposed algorithm, and its enhancement results are compared with those of existing algorithms. The experimental results show that the proposed algorithm performs well in both subjective and objective evaluation, and is especially good at preserving fine details and color.
- Published
- 2023
- Full Text
- View/download PDF
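The max-RGB illumination estimate used by entry 16 (and its duplicate record, entry 26) takes each pixel's brightest channel as its illumination, after which Retinex recovers reflectance as the ratio image. A sketch under that assumption, omitting the paper's FRFCM segmentation, linear fusion, and guided-filter refinement steps:

```python
import numpy as np

def max_rgb_illumination(img):
    """Initial illumination estimate: per-pixel maximum over R, G, B."""
    return img.max(axis=2)

def retinex_reflectance(img, eps=1e-6):
    """Retinex assumes image = reflectance * illumination, so the
    reflectance is the image divided by the estimated illumination."""
    L = max_rgb_illumination(img)
    return img / (L[..., None] + eps)

img = np.random.default_rng(0).uniform(0.05, 0.3, (4, 4, 3))
R = retinex_reflectance(img)
```

By construction the brightest channel of every reflectance pixel is close to 1, so the darkness of the scene has been factored out into the illumination map, which the algorithm is then free to brighten.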
17. Adaptive Single Low-Light Image Enhancement by Fractional Stretching in Logarithmic Domain
- Author
-
Thaweesak Trongtirakul, Sos S. Agaian, and Shiqian Wu
- Subjects
Retinex ,image enhancement ,logarithmic transformation ,low-light image ,fractional stretching functions ,non-uniform illumination ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Low-light image enhancement is a challenging task that aims to improve the visibility and quality of images captured in dark environments. However, existing methods often introduce undesirable artifacts such as color distortion, halo effects, blocking artifacts, and noise amplification. In this paper, we propose a novel method that overcomes these limitations by using the logarithmic domain fractional stretching approach to estimate the reflectance component of the image based on the improved Retinex theory. Moreover, we apply a simple adaptive gamma correction algorithm to the Lab color-space to adjust the brightness and saturation of the image. Our method effectively reduces the impact of non-uniform illumination and produces enhanced images with natural and realistic colors. Extensive experiments across diverse public datasets substantiate the superiority of our method. In both subjective and objective evaluations, our approach outperforms state-of-the-art methods.
- Published
- 2023
- Full Text
- View/download PDF
18. Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression
- Author
-
Jin, Yeying, Yang, Wenhan, and Tan, Robby T. (volume edited by Avidan, Shai, Brostow, Gabriel, Cissé, Moustapha, Farinella, Giovanni Maria, and Hassner, Tal)
- Published
- 2022
- Full Text
- View/download PDF
19. Color Image Enhancement Algorithm on the Basis of Wavelet Transform and Retinex Theory
- Author
-
Xia, Xinzhe, Yang, Jie, Li, Jinpeng, and Zhen, Jiaqi (volume edited by Wang, Wei, Liu, Xin, Na, Zhenyu, and Zhang, Baoju)
- Published
- 2022
- Full Text
- View/download PDF
20. LESN: Low-Light Image Enhancement via Siamese Network
- Author
-
Nie, Xixi, Song, Zilong, Zhou, Bing, and Wei, Yating (volume edited by Yao, Jian, Xiao, Yang, You, Peng, and Sun, Guang)
- Published
- 2022
- Full Text
- View/download PDF
21. Low-light image enhancement based on improved Retinex-Net
- Author
-
WANG Yannian, YANG Hengsheng, LIU Yanyan, and YANG Tao
- Subjects
low-light image, image enhancement, retinex-net, residual network, image fusion, Materials of engineering and construction. Mechanics of materials, TA401-492, Environmental engineering, TA170-171
- Abstract
To address the problems of high noise and insufficient feature extraction in Retinex-Net low-light image enhancement, this paper proposes a new network structure. First, the Retinex-Net network is used as the base model to decompose the input image, and a residual shrinkage network is introduced in the convolutional layers to remove noise generated during decomposition. Then, to preserve image details and suppress noise while enhancing brightness, the enhancement network is divided into three small sub-networks, which process the image separately. Finally, the adjusted images are fused to obtain the enhanced image. Compared with the SIRE, LIME, GLADNet, and Retinex-Net algorithms, experiments show that images processed by the proposed algorithm improve peak signal-to-noise ratio by an average of 3.48 dB, mean square error by an average of 0.0827, structural similarity by an average of 0.146, and lightness order error by an average of 271.6.
- Published
- 2022
- Full Text
- View/download PDF
22. Low‐light image enhancement via span correction function and discrete mapping model.
- Author
-
He, Lei, Liu, Shouxin, Long, Wei, and Li, Yanyan
- Subjects
IMAGE intensifiers, COGNITIVE processing speed, PIXELS, TEST methods
- Abstract
This paper proposes a new low-light image enhancement method, which we call the Local Discrete Mapping Method. The method limits the processing range to small areas with high information relevance, which better coordinates the enhancement quality of each area. First, the discrete mapping relationship of pixels (called discrete mapping points) that globally occupy a small part of the critical gray values is extracted and fixed to keep the enhancement amplitude of each local area consistent. Then, the other free mapping points are adjusted according to local features to achieve the best visual effect in each local area. In addition, this paper proposes a span correction function that takes the gray span between local pixels as its adjustment target. The function preserves the gray difference between freely mapped pixels to the maximum extent and significantly reduces damage to detail in local areas. Finally, we used 1500 test images from public datasets and eleven objective evaluation indicators to comprehensively test seven methods. The experimental results show that the proposed method delivers excellent dark-area quality enhancement, brightness detail protection, overall noise suppression, and processing speed. It is significantly better than similar methods in both visual quality and quantitative testing.
- Published
- 2023
- Full Text
- View/download PDF
23. Learning deep texture-structure decomposition for low-light image restoration and enhancement.
- Author
-
Zhao, Lijun, Wang, Ke, Zhang, Jinjing, Wang, Anhong, and Bai, Huihui
- Subjects
IMAGE reconstruction, IMAGE intensifiers, IMAGE enhancement (Imaging systems), DEEP learning, BLOCK designs
- Abstract
A great many low-light image restoration methods have built their models on Retinex theory. However, most of these methods cannot achieve good image detail enhancement. To achieve simultaneous restoration and enhancement, we study deep low-light image enhancement from the perspective of texture-structure decomposition, that is, learning an image smoothing operator. Specifically, we design a low-light restoration and enhancement framework in which a Deep Texture-Structure Decomposition (DTSD) network estimates two complementary constituents from the low-light image: a Fine-Texture (FT) map and a Prominent-Structure (PS) map. Since these two maps are trained to approximate the FT and PS maps obtained from the normal-light image, they can be combined into the restored image by pixel-wise addition. The DTSD network has three parts: a U-attention block, a Decomposition-Merger (DM) block, and an Upsampling-Reconstruction (UR) block. To explore multi-level informative features at different scales better than U-Net, the U-attention block is designed with intra-group and inter-group attention. In the DM block, we extract high-frequency and low-frequency features in low-resolution space. The informative feature maps from these two blocks are then fed into the UR block for the final prediction. Numerous experimental results demonstrate that the proposed method achieves simultaneous low-light image restoration and enhancement, with superior performance against many state-of-the-art approaches on several objective and perceptual metrics.
- Published
- 2023
- Full Text
- View/download PDF
24. A new grey mapping function and its adaptive algorithm for low-light image enhancement.
- Author
-
He, Lei, Long, Wei, Liu, Shouxin, Li, Yanyan, and Ding, Wei
- Subjects
IMAGE enhancement (Imaging systems), IMAGE intensifiers, COMPUTER vision, COMPUTER performance, GAMMA functions, COMPUTER systems, MATHEMATICAL mappings
- Abstract
Images taken in low-light conditions often suffer from low visibility. Besides affecting the sensory quality of images, this poor quality may also significantly limit the performance of various computer vision systems. Many grey-level mapping enhancement algorithms based on classic mapping functions, such as the gamma mapping function, have been proposed in recent years to improve the visual quality of low-light images. However, the classic mapping function cannot coordinate the greyscale distributions of the bright and dark areas of an image well and may easily lead to excessive enhancement, which prevents these improved algorithms from realizing their full potential. Therefore, this paper proposes a new multiparameter grey mapping method. Unlike the classic mapping function, the new method follows an enhancement strategy of first compressing the bright area and then adjusting the dark area, fundamentally overcoming the inherent shortcomings of the classic mapping function. The new mapping method not only directly controls the compression of the grey space in the bright area through its parameters, but also adjusts the greyscale distribution of dark areas without changing the greyscale values of pixels in the bright area. Finally, this paper also designs an adaptive enhancement algorithm with the new mapping method at its core to verify its effectiveness and flexibility. Experimental results showed that the adaptive algorithm had excellent performance in colour rendering, brightness enhancement, and noise suppression. It was also clearly better than current similar algorithms in visual quality and quantitative tests.
- Published
- 2023
- Full Text
- View/download PDF
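Entry 24's critique of the classic gamma mapping is easy to demonstrate numerically: a single global gamma that brightens dark pixels necessarily moves bright pixels too, because out = in ** gamma has no parameter that pins the bright range. (The paper's own multiparameter mapping is not given in the abstract and is not reproduced here.)

```python
import numpy as np

x = np.array([0.05, 0.50, 0.95])   # dark, mid, bright pixel values
gamma = 0.4                        # gamma < 1 brightens
y = x ** gamma

# the dark pixel is strongly lifted, but the bright pixel has shifted
# as well: the classic function cannot leave the bright area untouched
# without also weakening the dark-area boost
```

This coupling between bright and dark regions is exactly what a multiparameter mapping, with separate control over bright-area compression and dark-area adjustment, is designed to break.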
25. BrightFormer: A transformer to brighten the image.
- Author
-
Wang, Yong, Li, Bo, and Yuan, Xinlin
- Subjects
IMAGE intensifiers, PIXELS
- Abstract
Low-light image enhancement algorithms need to recover the overall information of an image, including both local details and global information. However, existing enhancement methods mainly focus on one or the other, and balancing the two simultaneously is challenging. This paper proposes a local dual-branch network (BrightFormer) for image enhancement that combines convolutions and transformers. The salient features of this paper are: (1) convolution is adopted to refine high-frequency information so that local features are preserved and propagated throughout the network; (2) combining gated parameters with prior illumination information (an ill-map) in self-attention not only improves the flexibility of feature expression but also makes global features easier to extract; (3) the obtained local details and global features are fused by spatial and channel attention in a feature equalization fusion unit (FEFU); (4) a deep feedforward network (DFN) encodes the location information between adjacent pixels, and the GELU activation function retains useful features and suppresses useless ones with an attention-like mechanism. Experimental results show that BrightFormer achieves competitive performance on quantitative metrics and visual perception on datasets such as LOL, MEF, and LIME. • Cross-convolution extracts rich local features of the image. • The gating mechanism incorporates prior information on illumination. • Different attention mechanisms are adopted for local and global features. • The GELU activation function serves as an attention mechanism.
- Published
- 2023
- Full Text
- View/download PDF
26. Optimization algorithm for low‐light image enhancement based on Retinex theory.
- Author
-
Yang, Jie, Wang, Jun, Dong, LinLu, Chen, ShuYuan, Wu, Hao, and Zhong, YaWen
- Subjects
IMAGE enhancement (Imaging systems), IMAGE intensifiers, MATHEMATICAL optimization, PROBLEM solving
- Abstract
To improve the visual quality of low-light images and reveal hidden details, an image enhancement algorithm is proposed based on a fast and robust fuzzy C-means (FRFCM) clustering algorithm combined with Retinex theory. The algorithm proceeds as follows. First, an initial illumination estimation image is constructed by max-RGB and segmented by the FRFCM algorithm. Second, the initial illumination estimate and its segmented image are linearly fused in a fixed proportion to obtain an optimized illumination estimate, which is then smoothed by guided filtering. Finally, the reflectance image is obtained via Retinex theory and the edge details are enhanced in equal ratio, yielding an enhanced image rich in texture detail. A large number of low-light image datasets are used to test the proposed algorithm, and its enhancement results are compared with those of existing algorithms. The experimental results show that the proposed algorithm performs well in both subjective and objective evaluation, and is especially good at preserving fine details and color.
- Published
- 2023
- Full Text
- View/download PDF
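The max-RGB illumination estimate and the Retinex decomposition used in the entry above are standard building blocks that can be sketched independently of the FRFCM clustering step. A minimal NumPy sketch (function names are ours, not the authors'):

```python
import numpy as np

def estimate_illumination_max_rgb(img):
    """Initial illumination estimate: per-pixel maximum over the R, G, B channels."""
    return img.max(axis=2)

def retinex_reflectance(img, illumination, eps=1e-6):
    """Retinex assumption S = R * L, so the reflectance is R = S / L (per channel)."""
    return img / (illumination[..., None] + eps)

# Toy 2x2 RGB image with values in [0, 1]
img = np.array([[[0.1, 0.2, 0.4], [0.3, 0.3, 0.3]],
                [[0.0, 0.5, 0.1], [0.6, 0.2, 0.2]]])
L = estimate_illumination_max_rgb(img)   # illumination map, shape (2, 2)
R = retinex_reflectance(img, L)          # reflectance, values in [0, 1]
```

In the full algorithm, L would be refined (segmentation fusion, guided filtering) before the division; here it is used directly for illustration.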
27. RESTORATION OF LOW-LIGHT IMAGE BASED ON DEEP RESIDUAL NETWORKS.
- Author
-
Song Jun Ri, Hyon Su Choe, Chung Hyok O, and Jang Su Kim
- Subjects
COMPUTER vision ,IMAGE reconstruction - Abstract
Images captured in low-light conditions usually suffer from very low contrast, which greatly increases the difficulty of computer vision tasks. Existing low-light image restoration methods still have limitations in image naturalness and noise. In this paper, we propose an efficient deep residual network that learns the difference map between a low-light image and its original image and uses it to restore the low-light image. Additionally, we propose a new low-light image generator, which is used to train the deep residual network. In particular, the proposed generator can simulate low-light images containing both luminance sources and completely dark regions. Our experiments demonstrate that the proposed method achieves good results for both synthetic and natural low-light images. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. 基于 RAW 域的低光照图像质量增强方法.
- Author
-
陈龙, 张一驰, 吕张凯, and 丁丹丹
- Abstract
Copyright of Journal of Computer-Aided Design & Computer Graphics / Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao is the property of Gai Kan Bian Wei Hui and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
29. 一种自我正则映射的弱光图像增强方法.
- Author
-
张华成, 刘朝倩, and 胡建斌
- Abstract
Most existing learning-based low-light image enhancement methods train their models with paired data. Although they achieve good results, they lose this advantage when no paired training set is available. Moreover, in practice it is almost impossible to obtain two images with different brightness from the same perspective of the same scene, so such models cannot be fully trained with paired data. Therefore, to avoid paired data and improve the model's domain adaptability, an unsupervised self-regularized attention-mapping method for low-light image enhancement, called SAMGAN, is proposed based on the Generative Adversarial Network. The method uses the illumination information of the original image, in the form of a self-regularized attention map over the grayscale image, to guide enhancement, and uses a self-feature-preserving loss to retain the features and content of the original image. It can be trained in the absence of low/normal-light image pairs and also generalizes well to various real-world test images. Extensive experiments show that the proposed method is superior to many current methods in terms of visual quality and subjective user studies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Robust U‐Net: Development of robust image enhancement model using modified U‐Net architecture.
- Author
-
Bhavani, Murapaka Dhanalakshmi, Murugan, Raman, and Goel, Tripti
- Subjects
IMAGE intensifiers ,DISCRETE wavelet transforms ,DRIVER assistance systems ,REMOTE sensing ,COMPUTER systems ,FOG ,COMPUTER vision ,HOUGH transforms - Abstract
Summary: Image dehazing is widely used as a preprocessing step for applications such as remote sensing, long-range imaging, and automatic driver assistance systems. Images acquired under low illumination, fog, and snow frequently exhibit low contrast and low brightness, which seriously degrade perceived visual quality and greatly limit the performance of machine vision systems. Images captured in low light or heavy fog may contain salient features that standard computer vision systems cannot extract. A good way to obtain an enhanced image is to determine the transmission map (haze density or low-illumination parameters) of the air-light medium from each pixel of the input image. In this article, an improved U-Net architecture is proposed to enhance images and provide robust performance against existing methods. In this model, the pooling operations of the generalized U-Net architecture are replaced by discrete wavelet transform-based down- and up-sampling. An attention module is developed by fusing the up- and down-samples to recover the low-level feature information missing from the up-samples. The proposed U-Net architecture is tested on several datasets: the See-in-the-Dark (SID) dataset, the Exclusively Dark Image Dataset (ExDark), the Realistic Single Image Dehazing (RESIDE) dataset, and a few real-time images, and it achieves superior performance in terms of PSNR, MSE, and SSIM compared to other state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
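The wavelet-based replacement for pooling described above can be illustrated with a single-level 2-D Haar transform: unlike max or average pooling, the four half-resolution subbands retain all the information, so the step is exactly invertible. A minimal NumPy sketch (the paper's architecture is not reproduced here):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands,
    each at half resolution, in place of a lossy pooling step."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: exact reconstruction from the four subbands."""
    a = (ll + lh + hl + hh) / 2
    b = (ll + lh - hl - hh) / 2
    c = (ll - lh + hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = a; out[0::2, 1::2] = b
    out[1::2, 0::2] = c; out[1::2, 1::2] = d
    return out

x = np.arange(16.0).reshape(4, 4)
subbands = haar_dwt2(x)          # four 2x2 subbands
rec = haar_idwt2(*subbands)      # identical to x
```

The lossless round trip is what lets an attention module recover "missing" low-level detail from the high-frequency subbands during up-sampling.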
31. Collaborative brightening and amplification of low-light imagery via bi-level adversarial learning.
- Author
-
Gao, Jiaxin, Liu, Yaohua, Yue, Ziyu, Fan, Xin, and Liu, Risheng
- Subjects
- *
IMPLICIT learning , *VISUAL perception , *LEARNING strategies , *VISUAL training , *IMAGE processing , *VIRTUAL networks - Abstract
Poor light conditions constrain the pursuit of clarity and visual quality in photography, especially on smartphone devices. Admittedly, existing specialized image processing methods, whether super-resolution or low-light enhancement methods, can hardly enhance the resolution and brightness of low-light images simultaneously. This paper presents a specialized enhancer with a dual-path modulated-interactive structure, dubbed CollaBA, to recover high-quality sharp images in the near absence of light; it learns a direct mapping from low-resolution dark images to their high-resolution, normally lit sharp versions. Specifically, we construct a generative modulation prior, serving as illuminance attention information, to regulate the exposure level of the neighborhood range. In addition, we construct an interactive degradation-removal branch that progressively embeds the generated intrinsic prior to recover high-frequency detail and contrast at the feature level. We also introduce a multi-substrate up-scaler to integrate multi-scale sampling features, effectively addressing artifact-related problems. Rather than adopting a naive, time-consuming learning strategy, we design a novel bi-level implicit adversarial learning mechanism as a fast training strategy. Extensive experiments on benchmark datasets demonstrate our model's wide-ranging applicability in various ultra-low-light scenarios, with significant improvements across 8 key performance metrics, notably a 35.8% improvement in LPIPS and a 23.1% improvement in RMSE. The code will be available at https://github.com/moriyaya/CollaBA. • We present CollaBA, a specialized dual-path modulated-aggregated enhancer, as a fresh approach to the intricate challenge of jointly amplifying and brightening images taken in extremely low-light conditions.
• Our CollaBA achieves remarkable performance by imposing generative modulation priors to guide exposure regulation, progressively integrating them into the multi-scale degradation-removal branch through spatial feature transformation. • Instead of a naive, time-consuming adversarial learning strategy, a novel bi-level implicit adversarial learning mechanism is designed, effectively improving the stability of training and the quality of visual perception. • Extensive experiments thoroughly validate that our method surpasses existing state-of-the-art approaches on real-world benchmark datasets, particularly in extremely low-light conditions, achieving a 35.8% improvement in LPIPS and a 23.1% improvement in RMSE. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Enhancement of Low-Light Images Using Illumination Estimate and Local Steering Kernel
- Author
-
Bong-Won Cheon and Nam-Ho Kim
- Subjects
low-light image ,enhancement ,Retinex ,steering kernel ,image processing ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Images acquired in low-light conditions often have poor visibility, which considerably degrades the performance of algorithms in computer vision and multimedia systems. Several low-light image enhancement methods have been proposed to address these issues, and various techniques have been used to restore close-to-normal light conditions or improve visibility. However, the enhanced images can still suffer from problems such as saturation of local light sources, color distortion, and amplified noise. In this study, we propose a low-light image enhancement technique using illumination component estimation and a local steering kernel to address these problems. The proposed method estimates the illumination components of low-light images and enhances illumination based on Retinex theory. The resulting image is then color-corrected and denoised using a local steering kernel. To evaluate the performance of the proposed method, low-light images taken under various conditions are processed with it, and the results demonstrate visual and quantitative superiority over existing methods.
- Published
- 2023
- Full Text
- View/download PDF
33. Low-Light Image Enhancement Using Deep Convolutional Network
- Author
-
Priyadarshini, R., Bharani, Arvind, Rahimankhan, E., Rajendran, N., Xhafa, Fatos, Series Editor, Raj, Jennifer S., editor, Iliyasu, Abdullah M., editor, Bestak, Robert, editor, and Baig, Zubair A., editor
- Published
- 2021
- Full Text
- View/download PDF
34. Low-light Image Enhancement Model with Low Rank Approximation
- Author
-
WANG Yi-han, HAO Shi-jie, HAN Xu, HONG Ri-chang
- Subjects
low-light image ,retinex model ,low rank matrix approximation ,fusion ,Computer software ,QA76.75-76.765 ,Technology (General) ,T1-995 - Abstract
Due to low lightness, images acquired in dim or backlit conditions tend to have poor visual quality. Retinex-based low-light enhancement models are effective in improving scene lightness, but they are often limited in handling the over-boosted image noise hidden in dark regions. To solve this issue, we propose a Retinex-based low-light enhancement model incorporating low-rank matrix approximation. First, the input image is decomposed into an illumination layer I and a reflectance layer R according to the Retinex assumption. During this process, the image noise in R is suppressed via low-rank approximation. Then, aiming to preserve the image details in bright regions while suppressing the noise in dark regions, a post-fusion step under the guidance of I is introduced. In experiments, qualitative and quantitative comparisons with other low-light enhancement models demonstrate the effectiveness of our method.
- Published
- 2022
- Full Text
- View/download PDF
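The low-rank suppression of noise in the reflectance layer, as described above, is commonly realized with a truncated SVD: small singular values mostly carry incoherent noise, so discarding them acts as a denoiser. A minimal NumPy sketch (the toy "reflectance" and its rank are our assumptions, not the paper's data):

```python
import numpy as np

def low_rank_approx(x, rank):
    """Truncated SVD: keep only the 'rank' largest singular values."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 32), np.linspace(1, 2, 32))  # rank-1 toy layer
noisy = clean + 0.05 * rng.standard_normal(clean.shape)          # hidden noise
denoised = low_rank_approx(noisy, rank=1)                        # noise suppressed
```

In the full model this step runs inside the Retinex decomposition rather than on the image directly, but the mechanism is the same.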
35. 结合自适应Gamma变换和MSRCR 算法的低光照图像增强方法.
- Author
-
云海姣 and 夏洋
- Subjects
IMAGE enhancement (Imaging systems) ,SIMULATED annealing ,IMAGE intensifiers ,ENTROPY (Information theory) ,COLOR space ,ALGORITHMS - Abstract
Copyright of China Sciencepaper is the property of China Sciencepaper and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
36. A Structure Preservation and Denoising Low-Light Enhancement Model via Coefficient of Variation.
- Author
-
Wu, Xingtai, Wu, Bin, He, Jingyuan, Fang, Bin, Shang, Zhaowei, and Zhou, Mingliang
- Subjects
- *
SOURCE code , *REFLECTANCE , *IMAGE intensifiers - Abstract
In this paper, we propose a structure-preserving and denoising low-light enhancement method that uses the coefficient of variation. First, we process the original low-light image with the coefficient of variation to obtain an enhanced illumination-gradient reference map. Second, we regularize the reflectance gradient with the total variation (TV) norm, which maintains the smoothness of the image and eliminates artifacts in the reflectance estimation. Finally, we combine these two constraint terms with Retinex theory, including a denoising regularization term; the final enhanced and denoised low-light image is obtained by iterative solution. Experimental results show that our method achieves superior performance in both subjective and objective assessments compared with other state-of-the-art methods (the source code is available at: https://github.com/bbxavi/SPDLEM.). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
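The coefficient of variation (standard deviation divided by mean) underlying the method above is a scale-invariant contrast measure, which is what makes it useful as a gradient reference in dark regions. A minimal local-window sketch (the window size and usage are ours, for illustration only):

```python
import numpy as np

def coefficient_of_variation(patch, eps=1e-6):
    """CV = std / mean: scale-invariant local contrast, so dark and bright
    regions are compared on an equal footing."""
    return patch.std() / (patch.mean() + eps)

def local_cv_map(img, win=3):
    """Slide a win x win window and record the CV at each valid position."""
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = coefficient_of_variation(img[i:i + win, j:j + win])
    return out

flat = np.full((5, 5), 0.2)               # uniform region -> CV near 0
edge = np.copy(flat); edge[:, 2:] = 0.8   # step edge -> high CV
```

Flat regions give CV near zero while edges give large CV regardless of absolute brightness, which is the property the illumination-gradient reference exploits.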
37. Unsupervised Decomposition and Correction Network for Low-Light Image Enhancement.
- Author
-
Jiang, Qiuping, Mao, Yudong, Cong, Runmin, Ren, Wenqi, Huang, Chao, and Shao, Feng
- Abstract
Vision-based intelligent driving assistance systems and transportation systems can be improved by enhancing the visibility of scenes captured in extremely challenging conditions. In particular, many low-light image enhancement (LIE) algorithms have been proposed to facilitate such applications in low-light conditions. While deep learning-based methods have achieved substantial success in this field, most of them require paired training data, which is difficult to collect. This paper advocates a novel Unsupervised Decomposition and Correction Network (UDCN) for LIE that does not depend on paired data for training. Inspired by the Retinex model, our method first decomposes images into illumination and reflectance components with an image decomposition network (IDN). Then, the decomposed illumination is processed by an illumination correction network (ICN) and fused with the reflectance to generate a primary enhanced result. In contrast with fully supervised learning approaches, UDCN is unsupervised and is trained only with low-light images and their histogram-equalized (HE) counterparts (which can be derived from the low-light image itself) as input. Both the decomposition and correction networks are optimized under the guidance of hybrid no-reference quality-aware losses and inter-consistency constraints between the low-light image and its HE counterpart. In addition, we utilize an unsupervised noise removal network (NRN) to remove the noise previously hidden in the darkness, further improving the primary result. Qualitative and quantitative comparison results demonstrate the efficacy of UDCN and its superiority over several representative alternatives in the literature. The results and code will be made publicly available at https://github.com/myd945/UDCN. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
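The histogram-equalized counterpart that the entry above trains against can indeed be derived from the low-light image alone. A minimal 8-bit HE sketch (the CDF-based mapping is the textbook formulation, not the paper's exact preprocessing):

```python
import numpy as np

def histogram_equalize(img):
    """Map gray levels through the normalized CDF so intensities spread over
    the full 0..255 range; no paired reference image is needed."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

dark = np.arange(64, dtype=np.uint8).reshape(8, 8)  # values only in 0..63
equalized = histogram_equalize(dark)                # stretched to 0..255
```

The equalized image is only a rough brightness target, which is why the network also needs the no-reference quality losses and consistency constraints mentioned above.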
38. Variational Low-Light Image Enhancement Based on Fractional-Order Differential
- Author
-
Ma, Qianting, Wang, Yang, and Zeng, Tieyong
- Abstract
Images captured under insufficient light often suffer from noticeable degradation of visibility, brightness, and contrast. Existing methods have limitations in enhancing low-visibility images, especially across diverse low-light conditions. In this paper, we first propose a new variational model for estimating the illumination map based on fractional-order differentials. Once the illumination map is obtained, we inject it directly into a general image restoration model whose regularization term can be viewed as an adaptive mapping. Since the regularization term in the restoration part can be arbitrary, one can model it with different off-the-shelf denoisers and does not need to explicitly design various priors on the reflectance component. Because of the model's flexibility, the desired enhanced results can be obtained efficiently with techniques such as plug-and-play inspired algorithms. Numerical experiments on three public datasets demonstrate that our proposed method outperforms other competing methods, including deep learning approaches, under three commonly used metrics of visual quality and image quality assessment.
- Published
- 2024
39. SPLIE: Optimal Illumination Estimation for Structure Preserving Low-light Image Enhancement
- Author
-
Ghada Sandoub, Randa Atta, Rabab Abdel-Kader, and Hesham Ali
- Subjects
image enhancement ,low-light image ,illumination estimation ,optimization ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
Images taken in low-light conditions often have flaws such as poor color vividness and low visibility, which negatively affect the performance of many vision-based systems. Many existing Retinex-based enhancement algorithms improve the visibility of low-light images by estimating the illumination map and using it to obtain the corresponding reflectance. However, improper estimation of the initial illumination map may produce unsatisfactorily illuminated enhanced images with weak color constancy. To address this problem, this paper proposes an efficient algorithm for the enhancement of low-light images. In this algorithm, the initial illumination map is obtained by fusing the maximum color channel with the bright channel prior. The estimated initial illumination map is then refined by solving a multi-objective problem containing illumination regularization terms, specifically the structural and textural details of the illumination. The optimization problem is solved using the alternating direction minimization (ADM) technique with an augmented Lagrangian multiplier to produce structure-aware smoothing of the initial illumination map. Finally, the contrast of the refined illumination map is adjusted using gamma correction. Experimental results on several benchmark datasets reveal the superiority of the proposed algorithm over state-of-the-art algorithms in both qualitative and quantitative analysis. Furthermore, the proposed algorithm produces enhanced images while reducing artifacts and preserving naturalness and structural details.
- Published
- 2021
- Full Text
- View/download PDF
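The two illumination estimates that the entry above fuses can be sketched directly: the maximum color channel is a per-pixel maximum over R, G, B, and the bright channel prior takes a local maximum of that map over a patch. The convex-combination fusion rule and all parameter values below are our assumptions, standing in for the paper's optimization-based refinement:

```python
import numpy as np

def max_color_channel(img):
    """Per-pixel maximum over the three color channels."""
    return img.max(axis=2)

def bright_channel(img, patch=3):
    """Bright channel prior: local maximum of the max color channel over a
    patch x patch neighborhood (edge-padded)."""
    m = img.max(axis=2)
    p = patch // 2
    padded = np.pad(m, p, mode="edge")
    out = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].max()
    return out

def initial_illumination(img, alpha=0.5):
    """Fuse the two estimates; a simple convex combination stands in for
    the paper's fusion rule."""
    return alpha * max_color_channel(img) + (1 - alpha) * bright_channel(img)

def gamma_correct(L, gamma=0.6):
    """Contrast adjustment of the illumination map (gamma < 1 brightens)."""
    return np.power(np.clip(L, 0, 1), gamma)

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (4, 4, 3))
L0 = gamma_correct(initial_illumination(img))
```

By construction the bright channel dominates the max color channel everywhere, so the fusion weight alpha trades local smoothness against point-wise accuracy.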
40. Low-light image enhancement based on membership function and gamma correction.
- Author
-
Liu, Shouxin, Long, Wei, Li, Yanyan, and Cheng, Hong
- Subjects
IMAGE intensifiers ,IMAGE enhancement (Imaging systems) ,CHARACTERISTIC functions ,COMPUTATIONAL complexity ,COLOR in design ,GAMMA functions ,MEMBERSHIP functions (Fuzzy logic) - Abstract
The aim of low-light image enhancement algorithms is to improve the luminance of images. However, existing algorithms inevitably over- or under-enhance the image and cause color distortion, both of which prevent the enhanced images from achieving satisfactory visual effects. In this paper, we propose a simple but effective low-light image enhancement algorithm based on a membership function and gamma correction (MFGC). First, we convert the image from the RGB (red, green, blue) color space to the HSV (hue, saturation, value) color space and design a method for self-adaptive computation of the traditional membership function's parameters. Then, we use the result of the membership function as the γ value and adjust coefficient c of the gamma function based on the characteristics of images with different gray levels. Finally, we design a linear function to avoid under-enhancement. The experimental results show that our method not only has lower computational complexity but also greatly improves the brightness of low-light areas and addresses uneven brightness. Images enhanced with the proposed method achieve better objective and subjective image quality evaluation results than other state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
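The core idea above, using a membership degree as a per-pixel γ exponent on the HSV value channel, can be sketched in a few lines. The Gaussian-style membership function below is illustrative only (the paper's self-adaptive parameter computation is not reproduced), but it shows why γ ≤ 1 brightens dark pixels:

```python
import numpy as np

def membership(v, mean, std, eps=1e-6):
    """Illustrative Gaussian-style membership of each V value relative to
    the image statistics; not the paper's exact function."""
    return np.exp(-((v - mean) ** 2) / (2 * std ** 2 + eps))

def mfgc_enhance(v, c=1.0):
    """Per-pixel gamma correction on the HSV value channel, with the
    membership degree used as the gamma exponent (gamma in (0, 1])."""
    gamma = membership(v, v.mean(), v.std())
    return np.clip(c * np.power(v, gamma), 0, 1)

v = np.array([[0.05, 0.10], [0.20, 0.40]])  # dark value channel
out = mfgc_enhance(v)                        # every pixel brightened or kept
```

Because each membership degree lies in (0, 1], v**gamma ≥ v for v in [0, 1], so no pixel gets darker; the linear anti-under-enhancement step mentioned above would follow this stage.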
41. Low-Light Image Enhancement Using Multi-Branch Structure and U-net.
- Author
-
WEI Yixue, ZHOU Dongming, WANG Changcheng, and LI Miao
- Subjects
IMAGE intensifiers ,IMAGE enhancement (Imaging systems) ,VISUAL fields ,COMPUTER vision ,LIGHT intensity ,LUMINOUS flux - Abstract
With the development of night-scene shooting technology, low-light image enhancement has become a new hot spot in the field of computer vision. However, factors such as lack of light, backlighting, and focusing failure lead to insufficient light intensity, resulting in excessively low brightness and contrast. To better process low-light images, a low-light image enhancement algorithm based on a multi-branch structure and U-net is proposed. Image features at different levels, extracted by a deep residual network, are cross-merged; the resulting images are enhanced by U-nets of different depths and structures, and the U-net-enhanced images are then fused to obtain the enhanced low-illuminance image. Extensive experiments show that the combination of the deep residual network and U-net extracts features better, and the low-light enhancement effect is largely superior to existing techniques. The proposed method not only improves brightness and contrast visually, with more realistic colors that better match the characteristics of the human visual system, but also achieves the best results among the compared algorithms on seven objective image quality indexes, including PSNR and SSIM. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Automatic assessment of concrete cracks in low-light, overexposed, and blurred images restored using a generative AI approach.
- Author
-
Guo, Pengwei, Meng, Xiangjun, Meng, Weina, and Bao, Yi
- Subjects
- *
GENERATIVE artificial intelligence , *GENERATIVE adversarial networks , *COMPUTER vision , *IMAGE reconstruction , *CRACKING of concrete , *DEEP learning - Abstract
Deep learning-based computer vision techniques have high efficiency in assessing concrete cracks from images, and the assessment can be automated using robots for higher efficiency. However, assessment accuracy is often compromised by low-quality images. This paper presents a Conditional Generative Adversarial Network (CGAN)-based approach to restore low-light, overexposed, and blurred images. The approach integrates attention mechanisms and residual learning and uses Wasserstein loss with gradient penalty. Crack assessment results show that the proposed approach outperforms state-of-the-art methods, regarding structural similarity (SSIM: 0.78 for deblurring, 0.95 for low-light enhancement, and 0.96 for overexposure correction) and peak signal-to-noise ratio (PSNR: 28.6 for deblurring, 31.4 for low-light enhancement, and 31.6 for overexposure correction). Restored images have been used to train a deep learning model for assessing concrete cracks. The Intersection over Union (IoU) and F1 score of crack segmentation are higher than 0.98 and 0.99, respectively, revealing high accuracy in crack assessment tasks. • A generative artificial intelligence (AI) approach is presented to restore low-quality images. • The presented approach can identify and restore low-light, overexposed, and blur images. • The restored images are utilized to train and test deep learning models for crack assessment. • The utilization of restored images improves the accuracy of crack assessment tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
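The IoU and F1 figures reported above for crack segmentation follow the standard definitions on binary masks, which are easy to state exactly. A minimal NumPy sketch (the toy masks are ours, for illustration):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union for binary segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def f1(pred, gt):
    """F1 (Dice) score: 2*TP / (2*TP + FP + FN)."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True      # 4-pixel crack
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True  # 6-pixel prediction
```

F1 is always at least IoU for the same masks, which is consistent with the paper reporting F1 > 0.99 alongside IoU > 0.98.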
43. A saturation-light enhancement method for low-light image via atmospheric scattering model.
- Author
-
Wang, Yu, Li, Jinyu, Zhang, Chuncheng, Wang, Yihong, Sui, Xiubao, and Chen, Qian
- Subjects
- *
ATMOSPHERIC models , *IMAGE intensifiers , *HAZE , *ALGORITHMS , *NOISE - Abstract
Existing low-light image enhancement methods mainly focus on light improvement and noise suppression, ignoring saturation enhancement. Strong saturation enhancement can broaden the application scenarios of low-light enhancement algorithms. In this paper, we treat a low-light image directly as a haze image of extremely low concentration and propose a saturation-light enhancement method (LSLE) that uses the atmospheric scattering model to enhance both light and saturation. Specifically, we first assume the airlight in low-light conditions to be weak, with the RGB channels regarded as equal, and construct a mathematical expression relating the maximum saturation enhancement capacity to the airlight, in which a constraint factor flexibly controls the strength of the airlight. Then, we estimate the maximum saturation enhancement capacity by calculating the difference between each pixel's saturation and the maximum saturation of its color cluster in the input image, and further construct a saturation difference formula. Next, we eliminate the correlation between the medium transmission map and the airlight under low-light conditions and estimate the initial medium transmission map using a maximum brightness prior. Finally, using the estimated airlight, transmission map, and atmospheric scattering model, we directly obtain enhanced results that improve saturation, light, and contrast simultaneously. Extensive experiments demonstrate our superiority in terms of quality and efficiency. • Low-light image enhancement is designed via the atmospheric scattering model. • A novel method for estimating saturation enhancement capacity is designed. • A model between saturation enhancement capacity and airlight is established. • Saturation and light are enhanced simultaneously. • The proposed method can also dehaze images with low haze. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
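Once the airlight A and transmission map t are estimated, the enhancement step above is simply an inversion of the atmospheric scattering model I = J·t + A·(1 − t) for the scene radiance J. A minimal NumPy sketch (the constant t and the numeric values are our toy assumptions; the paper estimates both quantities from the image):

```python
import numpy as np

def enhance_via_scattering_model(I, A, t, t_min=0.1):
    """Invert I = J*t + A*(1 - t) to recover the scene radiance J;
    t is clamped from below to avoid amplification blow-up."""
    t = np.maximum(t, t_min)
    return (I - A) / t + A

I = np.array([[0.12, 0.18], [0.25, 0.30]])  # dark observation
A = 0.05                                     # weak airlight (low-light assumption)
t = np.full_like(I, 0.4)                     # constant transmission for the sketch
J = enhance_via_scattering_model(I, A, t)    # brightened radiance
```

Since t < 1 and I > A everywhere in this toy case, dividing by t amplifies the signal above the airlight, which is exactly the brightening mechanism.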
44. Exposure difference network for low-light image enhancement.
- Author
-
Jiang, Shengqin, Mei, Yongyue, Wang, Peng, and Liu, Qingshan
- Subjects
- *
IMAGE intensifiers , *NETWORK performance , *CALIBRATION , *LIGHTING , *COLOR - Abstract
Low-light image enhancement aims to simultaneously improve the brightness and contrast of low-light images and recover the details of the visual content. This is a challenging task that makes typical data-driven methods suffer, especially when faced with severe information loss in extreme low-light conditions. In this work, we approach this task by proposing a novel exposure difference network. The proposed network generates a set of possible exposure corrections derived from the differences between synthesized images under different exposure levels, which are fused and adaptively combined with the raw input for light compensation. By modeling the intermediate exposure differences, our model effectively eliminates the redundancy existing in the synthesized data and offers the flexibility to handle image quality degradation resulting from varying levels of inadequate illumination. To further enhance the naturalness of the output image, we propose a global-aware color calibration module to derive low-frequency global information from inputs, which is further converted into a projection matrix to calibrate the RGB output. Extensive experiments show that our method can achieve competitive light enhancement performance both quantitatively and qualitatively. • A new exposure difference module is proposed to effectively utilize synthesized multi-exposure images. • A new global-aware color calibration module is proposed to calibrate the RGB values. • Experiments on recent challenging datasets show the appealing performance of our network. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
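The exposure-difference idea above, synthesizing several exposure levels, taking their pairwise differences, and adding a weighted combination back to the raw input, can be sketched without a network. The gamma-curve exposure synthesis and uniform weights below are our stand-ins for the learned components:

```python
import numpy as np

def synthesize_exposures(img, gammas=(0.8, 0.6, 0.4)):
    """Stand-in exposure synthesis: decreasing gamma gives progressively
    brighter renditions of the same low-light input."""
    return [np.power(img, g) for g in gammas]

def exposure_difference_correction(img, weights=None):
    """Fuse differences between adjacent exposure levels and add the
    combination back to the raw input as light compensation."""
    exposures = [img] + synthesize_exposures(img)
    diffs = [b - a for a, b in zip(exposures[:-1], exposures[1:])]
    if weights is None:
        weights = np.full(len(diffs), 1.0 / len(diffs))
    compensation = sum(w * d for w, d in zip(weights, diffs))
    return np.clip(img + compensation, 0, 1)

low = np.full((2, 2), 0.1)                 # uniformly dark input
out = exposure_difference_correction(low)  # brighter output
```

Modeling the differences rather than the exposures themselves removes the redundancy shared by all the synthesized images, which is the point the abstract makes; in the paper the fusion weights are learned and adaptive rather than uniform.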
45. 颜色恢复和边缘保持的低照度图像 超分辨率重建方法.
- Author
-
郭 林, 陈亮亮, 程德强, 江 曼, 寇旗旗, and 钱建生
- Subjects
- *
PERCEIVED quality , *LUMINOUS flux , *NEIGHBORHOODS , *EDGES (Geometry) , *TEXTURES - Abstract
To overcome texture information loss, color-offset distortion, and degraded reconstruction performance in super-resolution of images captured in low-illumination environments, this paper proposes a super-resolution method for low-illumination images based on color restoration and edge preservation. Built on anchored neighborhood regression (ANR), the method introduces an illuminance enhancement function with color restoration and edge preservation to improve the saliency of image content and edge texture, and selects weighted least squares (WLS) as the center-surround function to suppress the degradation of high-frequency features. Moreover, for the Y-channel component of the YCbCr color space, it uses the edge-preserving illuminance enhancement function to calculate the reflection component, further enhancing edge texture features. The experimental results show that the proposed method achieves better visual effects: compared with other methods, PSNR is improved by 63.15%, SSIM by 46.86%, and perceived quality (PI) by 4.12%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
46. Subjective low-light image enhancement based on a foreground saliency map model.
- Author
-
Hao, Pengcheng, Yang, Meng, and Zheng, Nanning
- Subjects
IMAGE intensifiers ,MAPS - Abstract
Most existing low-light image enhancement methods enhance the whole low-light image indiscriminately, neglecting its subjective content, which may lead to over-enhancement and noise amplification in the background. In this paper, we explore the challenging problem of subjective low-light image enhancement. To this end, we first develop a novel foreground saliency detection model to measure the subjective content of low-light images: a saliency map and a depth map of the low-light image are learned with a CNN and then fused using a guided filter. We then incorporate the foreground saliency map model into a general Retinex-based low-light image enhancement framework. Experimental results show that the proposed method improves the subjective perception of low-light images without amplifying background noise, compared with existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. Low-Light Image Enhancement Based on Nonsubsampled Shearlet Transform
- Author
-
Manli Wang, Zijian Tian, Weifeng Gui, Xiangyang Zhang, and Wenqing Wang
- Subjects
Low-light image ,image enhancement ,noise suppression ,nonsubsampled shearlet transform ,image decomposition ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
To improve the observability of low-light images, a low-light image enhancement algorithm based on the nonsubsampled shearlet transform (NSST), called LIEST, is presented. The proposed algorithm synchronously achieves contrast improvement, noise suppression, and the enhancement of details in specific directions. An enhancement framework for low-light noisy images is first derived; according to this framework, a low-light noisy image is decomposed into low-pass subband coefficients and bandpass direction subband coefficients by NSST. Then, in the NSST domain, an illumination map is estimated from a bright channel of the low-pass subband coefficients, and noise is simultaneously suppressed by shrinking the bandpass direction subband coefficients. Finally, based on the estimated illumination map, the low-pass subband coefficients, and the shrunken bandpass direction subband coefficients, the inverse NSST is applied to obtain the enhanced low-light image. Experiments demonstrate that LIEST exhibits superior performance in improving contrast, suppressing noise, and highlighting specific details compared to seven similar algorithms.
- Published
- 2020
- Full Text
- View/download PDF
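The framework in the abstract above (decompose, brighten the low-pass band, shrink the noisy detail band, recombine) can be sketched with a simple two-band split; a Gaussian low-pass stands in for the NSST, since NSST implementations are not widely available. All parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian low-pass -- a crude stand-in for the NSST low-pass band."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, tmp)

def enhance(img, sigma=2.0, gamma=0.6, thresh=0.02):
    """Split the image into an illumination-bearing low-pass part and a
    detail/noise band-pass part, brighten the former, shrink the latter."""
    low = gaussian_blur(img, sigma)
    band = img - low
    illum = np.clip(gaussian_blur(low, 2 * sigma), 1e-3, 1.0)   # illumination map
    low_enh = low * illum ** (gamma - 1.0)         # retinex-style brightening
    band_shrunk = np.sign(band) * np.maximum(np.abs(band) - thresh, 0.0)  # soft threshold
    return np.clip(low_enh + band_shrunk, 0.0, 1.0)
```

The soft-thresholding step mirrors the coefficient shrinkage the paper applies to the bandpass directional subbands; the real method operates on many directional subbands rather than a single residual.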
48. Dual-purpose method for de-hazing and enhancement of underwater and low-light images.
- Author
-
Liu, Ke and Liang, Yongquan
- Abstract
Underwater images and low-light images share many degradations, such as blur and color distortion, yet few unified methods handle both well. This paper proposes a method based on multi-scale retinex with color restoration (MSRCR) and color correction. First, color channel transfer (CCT) is used to preprocess the image. Then, MSRCR combined with guided filtering is applied to remove haze. Finally, a statistical color-cast correction method with a fusion smoothing filter is proposed to enhance the image, improving its color contrast and sharpness. Experiments show that the proposed method is effective for both image de-hazing and enhancement. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
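The MSRCR core used by the method above is a standard construction and can be sketched directly: per-channel multi-scale retinex (log image minus log of a Gaussian surround, averaged over scales) weighted by a color-restoration factor. This is a minimal NumPy sketch assuming an RGB image in [0, 1]; the `alpha`/`beta` constants and scales are conventional defaults, not values from the paper.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian surround for the retinex log-ratio."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, tmp)

def msr(channel, sigmas):
    """Multi-scale retinex: average log(I) - log(Gaussian(I)) over surround scales."""
    eps = 1e-6
    out = np.zeros_like(channel, dtype=float)
    for s in sigmas:
        out += np.log(channel + eps) - np.log(gaussian_blur(channel, s) + eps)
    out /= len(sigmas)
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo + eps)          # rescale to [0, 1] for display

def msrcr(img, sigmas=(2, 6, 18), alpha=125.0, beta=46.0):
    """MSRCR: per-channel MSR weighted by a color-restoration factor."""
    eps = 1e-6
    total = img.sum(axis=2) + eps
    out = np.zeros_like(img, dtype=float)
    for c in range(3):
        crf = beta * (np.log(alpha * img[..., c] + eps) - np.log(total))
        out[..., c] = crf * msr(img[..., c], sigmas)
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo + eps)
```

The paper's CCT preprocessing and fusion smoothing filter are not shown here; this covers only the retinex stage.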
49. LIIS: Low-light image instance segmentation.
- Author
-
Li, Wei, Huang, Ya, Zhang, Xinyuan, and Han, Guijin
- Subjects
- *
IMAGE segmentation , *DIGITAL image processing , *IMAGE enhancement (Imaging systems) , *IMAGE denoising , *FEATURE extraction , *IMAGE fusion , *WAVELETS (Mathematics) - Abstract
Image features in low-light scenes are hard to distinguish and heavily contaminated by noise, which drastically degrades the performance of popular instance segmentation models. We propose a two-stage approach for instance segmentation of low-light images: enhancement followed by segmentation. Stage I performs Low-Light Image Enhancement (LLIE); we propose a post-processing Detail Enhancement Denoising Module (DEDM) to suppress degradation introduced by the enhancement in the preprocessing stage. Stage II segments the enhanced images; we construct the W-BCNet instance segmentation network and design a Wavelet Feature Fusion Module (WFFM) in the feature extraction stage to preserve more fine-grained features. We achieve strong segmentation results on the LIS dataset, and detailed comparative experiments and ablation studies show the advantages and generalization ability of our model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
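The abstract above does not detail the WFFM, but the generic idea of wavelet-domain feature fusion can be illustrated: decompose two feature maps with a one-level Haar transform, average the low-frequency bands, and keep the larger-magnitude coefficient in each high-frequency band to preserve fine detail. This is a hypothetical sketch, not the paper's module; it assumes even-sized, single-channel maps.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform of an even-sized array -> (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2          # row averages
    d = (x[0::2] - x[1::2]) / 2          # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.empty((2 * h, 2 * w))
    x[0::2] = a + d; x[1::2] = a - d
    return x

def wavelet_fuse(f1, f2):
    """Average low-frequency bands; keep the stronger high-frequency coefficients."""
    c1, c2 = haar2d(f1), haar2d(f2)
    ll = 0.5 * (c1[0] + c2[0])
    highs = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *highs)
```

In a real network the fusion would act on learned feature tensors and the fusion weights would themselves be learned; the max-magnitude rule here is only the classical hand-crafted baseline.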
50. Dual-Purpose Method for Underwater and Low-Light Image Enhancement via Image Layer Separation
- Author
-
Chenggang Dai, Mingxing Lin, Jingkun Wang, and Xiao Hu
- Subjects
Dual-purpose enhancement method ,underwater image ,low-light image ,image layer separation ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Underwater and low-light images possess different characteristics; hence, few approaches jointly improve the visibility of both. Herein, a dual-purpose method that performs well in enhancing the visibility of both underwater and low-light images is proposed. The formation of these two types of images is first described in a unified manner. An objective function is then formulated, with several novel regularization terms imposed on the optimization, to separate incident light from reflectance while simultaneously suppressing intensive noise. Next, post-processing algorithms correct the color distortion of the incident light and improve the contrast of the reflectance. Finally, an enhanced image with clear visibility and a natural appearance is obtained by recombining the processed reflectance and incident light. Comprehensive tests comparing the proposed method with other state-of-the-art methods on images captured in various scenes demonstrate its effectiveness on both underwater and low-light images.
- Published
- 2019
- Full Text
- View/download PDF
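The layer-separation idea in the abstract above follows the retinex decomposition I = R · L (reflectance times incident light). A minimal sketch, assuming a grey image in [0, 1] and using a simple box blur as the smoothness prior in place of the paper's regularized optimization:

```python
import numpy as np

def box_blur(img, r=3):
    """Plain (2r+1)x(2r+1) mean filter, edge-padded."""
    h, w = img.shape
    pad = np.pad(img, r, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def separate_and_enhance(img, r=3, gamma=0.5):
    """Separate V into incident light L (smooth) and reflectance R = V / L,
    gamma-correct L to lift dark regions, then recombine R * L**gamma."""
    eps = 1e-3
    light = np.clip(box_blur(img, r), eps, 1.0)   # smoothness prior on incident light
    refl = np.clip(img / light, 0.0, 1.0)         # reflectance layer
    return np.clip(refl * light ** gamma, 0.0, 1.0)
```

The paper instead solves for L and R jointly under novel regularization terms and adds color-correction post-processing; this sketch shows only why separating the layers lets illumination and detail be adjusted independently.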