10 results for "Jinjiang Li"
Search Results
2. Attention‐based multi‐channel feature fusion enhancement network to process low‐light images
- Author
Xintao Xu, Jinjiang Li, Zhen Hua, and Linwei Fan
- Subjects
Signal Processing, Computer Vision and Pattern Recognition, Electrical and Electronic Engineering, Software
- Published
- 2022
3. Attention‐based multi‐scale feature fusion for free‐space detection
- Author
Pengfei Song, Hui Fan, Jinjiang Li, and Feng Hua
- Subjects
Mechanical Engineering, Transportation, Law, General Environmental Science
- Published
- 2022
4. Two‐stage single image dehazing network using swin‐transformer
- Author
Xiaoling Li, Zhen Hua, and Jinjiang Li
- Subjects
Signal Processing, Computer Vision and Pattern Recognition, Electrical and Electronic Engineering, Software
- Published
- 2022
5. Two‐stage progressive residual learning network for multi‐focus image fusion
- Author
Jinjiang Li, Zhen Hua, and Haoran Wang
- Subjects
Image fusion, Computer science, Multi focus, Residual, Signal Processing, Learning network, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Electrical and Electronic Engineering, Software
- Published
- 2021
6. LBP‐based progressive feature aggregation network for low‐light image enhancement
- Author
Jinjiang Li, Nana Yu, and Zhen Hua
- Subjects
Feature aggregation, Computer science, Image Processing and Computer Vision, Pattern recognition, Image enhancement, Signal Processing, Photography, Computer software, Computer Vision and Pattern Recognition, Artificial intelligence, Electrical and Electronic Engineering, Software
- Abstract
At night or in other low‐illumination environments, optical imaging devices cannot accurately capture detail and color information because few photons are captured and the signal‐to‐noise ratio is low. Consequently, the image is noisy, with low contrast and inaccurate color information, which degrades human visual perception and creates significant challenges for computer vision tasks. Low‐light image enhancement therefore has great research value, as it aims to reduce image noise and improve image quality. In this study, we propose an LBP‐based progressive feature aggregation network (P‐FANet) for low‐light image enhancement. The LBP feature is insensitive to illumination and contains rich texture information; we feed it into each iteration of the network in an accompanying manner, which helps to restore detailed information in the low‐light image. First, the low‐light image is passed through a dual attention mechanism to extract global features. Second, the extracted features enter the feature aggregation module (FAM) for fusion. Third, a recurrent layer shares the features extracted at different stages, and a residual layer extracts deeper features. Finally, the enhanced image is output. Ablation experiments verify the rationality of the design, and extensive experiments show that the method outperforms many other advanced methods in both subjective and objective evaluations.
- Published
- 2021
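This entry and the underwater‐enhancement entry below both use local binary pattern features as an illumination‐insensitive network input. As a rough illustration of that property only (the function name and this minimal 3×3 implementation are my own, not taken from the papers), a basic LBP can be sketched as:

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 3x3 local binary pattern: encode each interior pixel by
    thresholding its 8 neighbours against the centre value."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= centre).astype(np.uint8) << bit
    return out
```

Because the code depends only on the ordering of each pixel relative to its neighbours, adding a constant brightness offset to the whole image leaves the LBP map unchanged, which is the texture‐preserving property the abstracts rely on.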
7. Underwater image enhancement via LBP‐based attention residual network
- Author
Jinjiang Li, ZhiXiong Huang, and Zhen Hua
- Subjects
Computer science, Image enhancement, Residual, Signal Processing, Photography, Computer vision, Computer software, Computer Vision and Pattern Recognition, Artificial intelligence, Electrical and Electronic Engineering, Underwater, Software
- Abstract
Owing to light absorption and scattering in underwater environments, underwater images exhibit color deviation, low contrast, detail blur, and other degradations. This paper proposes an underwater image enhancement method combining a residual convolution network, the local binary pattern (LBP), and a self‐attention mechanism. The LBP operator processes the input underwater images, and the resulting LBP feature images together with the underwater images constitute the network input. The network consists of three modules: a color correction module that removes the color deviation in underwater images, a detail repair module that restores the integrity of details, and an LBP auxiliary enhancement module for global enhancement of image details. The correction and repair modules generate the color‐corrected image and the detail‐supplement image, respectively, and the final result is obtained by superimposing the two generated images. The experimental results confirm that our method reproduces bright colors and complete details, showing a significant improvement over other advanced methods in quantitative evaluation.
- Published
- 2021
8. Hierarchical guided network for low‐light image enhancement
- Author
Xiaomei Feng, Jinjiang Li, and Hui Fan
- Subjects
Computer science, Image enhancement, Signal Processing, Photography, Computer vision, Computer software, Computer Vision and Pattern Recognition, Artificial intelligence, Electrical and Electronic Engineering, Software
- Abstract
Due to insufficient illumination in low‐light conditions, captured images have low brightness and contrast, which hinders other computer vision tasks. Low‐light enhancement is a challenging task that requires simultaneous processing of colour, brightness, contrast, artefacts and noise. To solve this problem, the authors apply a deep residual network to the low‐light enhancement task and propose a hierarchical guided low‐light enhancement network. The key to this method is to recombine hierarchically guided features through a feature aggregation module. The network is based on U‐Net and is hierarchically guided by an input pyramid branch in the encoding and decoding network. The input pyramid structure realizes multi‐level receptive fields and generates a hierarchical representation; the encoding and decoding structure concatenates these hierarchical features and generates a new set of hierarchical features. Finally, the feature aggregation module fuses the different features to achieve low‐light enhancement. Ablation experiments prove the effectiveness of each component. In addition, the authors evaluate on different datasets, and the experimental results show that the proposed method is superior to other methods in both subjective and objective evaluation.
- Published
- 2021
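The input pyramid branch in the abstract above builds multi‐level, downsampled views of the input. A minimal sketch of that idea (the `input_pyramid` name and the choice of 2×2 average pooling are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def input_pyramid(img, levels=3):
    """Build a hierarchical input branch by repeated 2x average pooling;
    each level halves the spatial resolution of the previous one."""
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape[:2]
        # Crop to even dimensions so the image tiles into 2x2 blocks.
        x = pyr[-1][:h - h % 2, :w - w % 2]
        # 2x2 average pooling via a reshape-and-mean trick.
        x = x.reshape(h // 2, 2, w // 2, 2, *x.shape[2:]).mean(axis=(1, 3))
        pyr.append(x)
    return pyr
```

Each level of such a pyramid covers the same scene with a larger effective receptive field per pixel, which is what lets the encoder–decoder concatenate scale‐matched guidance features.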
9. Consistent image processing based on co‐saliency
- Author
Zhen Hua, Jinjiang Li, Xiangnan Ren, and Xinbo Jiang
- Subjects
Computer Networks and Communications, Computer science, Image Processing and Computer Vision, Image processing, Human-Computer Interaction, Artificial Intelligence, Computational linguistics, Natural language processing, Computer vision, Computer software, Computer Vision and Pattern Recognition, Information Systems
- Abstract
In a group of images, the recurrent foreground objects are considered the key objects of the group; in co‐saliency detection these are described as common saliency objects. The aim is to naturally guide the user's gaze to these common salient objects so that users can find them easily, without interference from other information. Therefore, a method is proposed for reducing user visual attention based on co‐saliency detection. Through a co‐saliency detection algorithm and a matting algorithm for image preprocessing, the exact positions of non‐common saliency objects (called Regions of Interest here, i.e. ROIs) in the image group are obtained. The attention retargeting algorithm then uses the internal features of the image to adjust the saliency of the ROI areas. In the HSI colour space, the three components H, S, and I are adjusted separately. First, the hue distribution is constructed by the Dirac kernel function, and the hue distribution most similar to the surrounding environment is selected as the best hue distribution for the ROI areas. The S and I components are then set according to the contrast difference between the ROI areas and the surrounding background, following the user's demands. Experimental results show that this method effectively reduces the ROI areas' attraction to the user's visual attention. Moreover, compared with other methods, it achieves a better saliency adjustment effect, and the processed image is more natural.
- Published
- 2021
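The S and I adjustment described in the abstract amounts to shrinking the contrast between an ROI and its background so the ROI attracts less attention. A simplified single‐channel sketch of that idea (the `suppress_roi` helper and its `target_diff` parameter are illustrative assumptions, not the authors' interface):

```python
import numpy as np

def suppress_roi(channel, roi_mask, target_diff=0.1):
    """Shift ROI values so their mean sits only `target_diff` away from
    the background mean, reducing how strongly the ROI draws attention."""
    roi = roi_mask.astype(bool)
    bg_mean = channel[~roi].mean()
    shift = channel[roi].mean() - bg_mean
    out = channel.astype(float).copy()
    # Move the ROI toward the background mean, keeping a small residual contrast.
    out[roi] = out[roi] - shift + np.sign(shift) * target_diff
    return np.clip(out, 0.0, 1.0)
```

Applied to the saturation and intensity channels of an HSI decomposition, a step like this lowers the ROI's pop‐out against its surroundings while leaving the background untouched.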
10. Image reflection removal using end‐to‐end convolutional neural network
- Author
Jinjiang Li, Guihui Li, and Hui Fan
- Subjects
Artificial neural network, Computer science, Deep learning, Pattern recognition, Convolutional neural network, Object detection, Signal Processing, Computer Vision and Pattern Recognition, Artificial intelligence, Electrical and Electronic Engineering, Software, Network model
- Abstract
Single image reflection removal is an ill-posed problem. To solve it, this study develops RRnet, a network structure based on a deep encoder-decoder. Unlike most deep learning strategies applied in this context, the authors find that redundant information increases the difficulty of predicting images; thus, the proposed method uses the mixed reflection image cascaded with edges as the network input. The proposed network is divided into two parts. The first part is a deep convolutional encoder-decoder network that takes the mixed reflection image and the target edge as input to predict the target layer. The second part is an identical encoder-decoder structure that takes the mixed reflection image and the reflection edge as input to predict the reflection layer. In addition, the authors use a joint loss to optimise the network model. To train the network, they also create an image dataset for reflection removal that includes both real and synthetic mixed reflection images. Four evaluation indicators are used to compare the proposed method with six other methods, and the experimental results indicate that it is superior to previous methods.
- Published
- 2020
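Synthetic mixed reflection images, as mentioned in the abstract, are commonly produced by blending a transmission layer with a blurred reflection layer. The recipe below is a generic sketch of that convention (the blend weight, separable box blur, and `synth_mixture` name are assumptions, not the authors' exact procedure):

```python
import numpy as np

def synth_mixture(transmission, reflection, alpha=0.7, blur=5):
    """Blend a transmission layer with a blurred reflection layer to
    mimic a photograph taken through glass."""
    k = np.ones(blur) / blur
    # Separable box blur as a cheap stand-in for a Gaussian blur.
    r = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, reflection)
    r = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, r)
    return np.clip(alpha * transmission + (1 - alpha) * r, 0.0, 1.0)
```

Pairing each synthetic mixture with its known transmission and reflection layers is what gives a reflection‐removal network its supervised training signal.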
Discovery Service for Jio Institute Digital Library