689 results for "underwater image enhancement"
Search Results
2. Frequency Modulated Deformable Transformer for Underwater Image Enhancement
- Author
-
Dukre, Adinath, Deshmukh, Vivek, Kulkarni, Ashutosh, Phutke, Shruti, Vipparthi, Santosh Kumar, Gonde, Anil B., Murala, Subrahmanyam, Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Attentive Color Fusion Transformer Network (ACFTNet) for Underwater Image Enhancement
- Author
-
Wani, Mohd Ubaid, Khan, Md Raqib, Kulkarni, Ashutosh, Phutke, Shruti S., Vipparthi, Santosh Kumar, Murala, Subrahmanyam, Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
4. UDUIE: Unpaired Domain-Irrelevant Underwater Image Enhancement
- Author
-
Luo, Han, Han, Lu, Yu, Zhibin, Hadfi, Rafik, editor, Anthony, Patricia, editor, Sharma, Alok, editor, Ito, Takayuki, editor, and Bai, Quan, editor
- Published
- 2025
- Full Text
- View/download PDF
5. A typhoon optimization algorithm and difference of CNN integrated bi-level network for unsupervised underwater image enhancement.
- Author
-
Lin, Feng, Wang, Jian, Pedrycz, Witold, Zhang, Kai, and Ablameyko, Sergey
- Subjects
Convolutional neural networks, Optimization algorithms, Image intensifiers, Image processing, Typhoons, Image enhancement (Imaging systems)
- Abstract
Underwater image processing presents a greater challenge compared to its land-based counterpart due to inherent issues such as pervasive color distortion, diminished saturation, contrast degradation, and blurred content. Existing methods rooted in general image theory and models of image formation often fall short in delivering satisfactory results, as they typically consider only common factors and make assumptions that do not hold in complex underwater environments. Furthermore, the scarcity of extensive real-world datasets for underwater image enhancement (UIE) covering diverse scenes hinders progress in this field. To address these limitations, we propose an end-to-end unsupervised underwater image enhancement network, TOLPnet. It adopts a bi-level structure, utilizing the Typhoon Optimization (TO) algorithm at the upper level to optimize the super-parameters of the convolutional neural network (CNN) model. The lower level involves a Difference of CNN that employs trainable parameters for image input-output mapping. A novel energy-limited method is proposed for dehazing, and the Laplacian pyramid mechanism decomposes the image into high-frequency and low-frequency components for enhancement. The TO algorithm is leveraged to select enhancement strength and weight coefficients for loss functions. The cascaded CNN acts as a refining network. Experimental results on typical underwater image datasets demonstrate that our proposed method surpasses many state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
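The Laplacian-pyramid split into low- and high-frequency components, as described in this abstract, is a standard operation. A minimal one-level sketch in NumPy, using 2x2 average pooling as a stand-in for the usual Gaussian filtering (an assumption; this is not the authors' implementation):

```python
import numpy as np

def laplacian_split(img):
    """One-level Laplacian decomposition: low-frequency base + high-frequency residual.

    img: 2-D float array with even height/width.
    Returns (low, high) with img == upsample(low) + high.
    """
    h, w = img.shape
    # Downsample by 2x2 average pooling (stand-in for Gaussian blur + decimate).
    low = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # Upsample by nearest-neighbour repetition.
    up = low.repeat(2, axis=0).repeat(2, axis=1)
    # The high-frequency residual carries edges and fine detail.
    high = img - up
    return low, high

img = np.arange(16, dtype=float).reshape(4, 4)
low, high = laplacian_split(img)
# Perfect reconstruction: upsampled base + residual recovers the input.
recon = low.repeat(2, axis=0).repeat(2, axis=1) + high
```

A full pyramid applies the same split recursively to `low`; the residual `high` is what the enhancement stage then amplifies.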
6. LM-CycleGAN: Improving Underwater Image Quality Through Learned Perceptual Image Patch Similarity and Multi-Scale Adaptive Fusion Attention.
- Author
-
Wu, Jiangyan, Zhang, Guanghui, and Fan, Yugang
- Abstract
The underwater imaging process is often hindered by high noise levels, blurring, and color distortion due to light scattering, absorption, and suspended particles in the water. To address the challenges of image enhancement in complex underwater environments, this paper proposes an underwater image color correction and detail enhancement model based on an improved Cycle-consistent Generative Adversarial Network (CycleGAN), named LPIPS-MAFA CycleGAN (LM-CycleGAN). The model integrates a Multi-scale Adaptive Fusion Attention (MAFA) mechanism into the generator architecture to enhance its ability to perceive image details. At the same time, the Learned Perceptual Image Patch Similarity (LPIPS) is introduced into the loss function to make the training process more focused on the structural information of the image. Experiments conducted on the public datasets UIEB and EUVP demonstrate that LM-CycleGAN achieves significant improvements in Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Average Gradient (AG), Underwater Color Image Quality Evaluation (UCIQE), and Underwater Image Quality Measure (UIQM). Moreover, the model excels in color correction and fidelity, successfully avoiding issues such as red checkerboard artifacts and blurred edge details commonly observed in reconstructed images generated by traditional CycleGAN approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
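As a rough illustration of folding a perceptual term into a reconstruction objective, the sketch below combines an L1 cycle-consistency term with a toy patch-statistic distance. Real LPIPS compares activations of a pretrained network (e.g. via the `lpips` package); `patch_feature_distance` here is only an illustrative stand-in, not the paper's loss:

```python
import numpy as np

def l1_cycle_loss(x, x_rec):
    """Cycle-consistency term: mean absolute error after a full G -> F cycle."""
    return np.mean(np.abs(x - x_rec))

def patch_feature_distance(a, b, patch=2):
    """Toy stand-in for a perceptual patch similarity: squared distance between
    per-patch mean/variance 'features'. LPIPS proper uses deep activations."""
    h, w = a.shape
    pa = a.reshape(h // patch, patch, w // patch, patch)
    pb = b.reshape(h // patch, patch, w // patch, patch)
    fa = np.stack([pa.mean(axis=(1, 3)), pa.var(axis=(1, 3))])
    fb = np.stack([pb.mean(axis=(1, 3)), pb.var(axis=(1, 3))])
    return np.mean((fa - fb) ** 2)

def total_loss(x, x_rec, lam=0.5):
    # Weighted sum: pixel-level cycle term plus patch-level perceptual term.
    return l1_cycle_loss(x, x_rec) + lam * patch_feature_distance(x, x_rec)

x = np.zeros((4, 4))
loss_same = total_loss(x, x)  # identical images incur zero loss
```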
7. Adaptive rule-based colour component weight assignment strategy for underwater video enhancement.
- Author
-
Sonawane, Jitendra P., Patil, Mukesh D., and Birajdar, Gajanan K.
- Subjects
-
Image intensifiers, Principal components analysis, Light absorption, Light scattering, Color
- Abstract
Images and videos collected in an underwater environment often have low contrast, blur, and colour cast due to two significant sources of distortion: light scattering and absorption. In an underwater image or video, suspended particles attenuate the red and blue components more than the green channel. This article presents two adaptive weight allocation strategies based on rule assignment for the red, green, and blue channels. Firstly, an improved balanced contrast enhancement technique (IBCET) is proposed using an adaptive contrast enhancement scheme based on colour component weight assignment. Secondly, a modified fuzzy contrast enhancement technique, which obtains the intensification factor from the weight of each component, is developed. Finally, principal component analysis fusion is employed to improve the overall colour and contrast of the output video frames. Qualitative and quantitative evaluations on a standard underwater video dataset validate the improved performance of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
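The abstract does not reproduce the article's exact weight-assignment rules, but a widely used underwater compensation of the attenuated red channel by the well-preserved green channel (Ancuti-style) conveys the idea; the formula below is that generic technique, not the paper's:

```python
import numpy as np

def compensate_red(rgb, alpha=1.0):
    """Boost the attenuated red channel using the well-preserved green channel
    (Ancuti-style compensation; the article's own rule-based weights differ).

    rgb: float array in [0, 1], shape (H, W, 3).
    """
    r, g = rgb[..., 0], rgb[..., 1]
    # Compensation is strongest where red is weak (1 - r) and green is strong (g).
    r_comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    out = rgb.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out

# A greenish underwater patch: the red mean rises toward green after compensation.
img = np.zeros((2, 2, 3))
img[..., 0] = 0.1  # weak red
img[..., 1] = 0.6  # strong green
out = compensate_red(img)
```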
8. UP‐GAN: Channel‐spatial attention‐based progressive generative adversarial network for underwater image enhancement.
- Author
-
Wang, Ning, Chen, Yanzheng, Wei, Yi, Chen, Tingkai, and Karimi, Hamid Reza
- Subjects
Generative adversarial networks, Image intensifiers, Light scattering, Attenuation of light, Noise
- Abstract
Focusing on severe color deviation, low brightness, and mixed noise caused by inherent scattering and light attenuation effects within underwater environments, an underwater‐attention progressive generative adversarial network (UP‐GAN) is innovated for underwater image enhancement (UIE). Salient contributions are as follows: (1) By elaborately devising an underwater background light estimation module via an underwater imaging model, the degradation mechanism can be sufficiently integrated to fuse prior information, which in turn saves computational burden on subsequent enhancement; (2) to suppress mixed noise and enhance foreground, simultaneously, an underwater dual‐attention module is created to fertilize skip connection from channel and spatial aspects, thereby getting rid of noise amplification within the UIE; and (3) by systematically combining with spatial consistency, exposure control, color constancy, color relative dispersion losses, the entire UP‐GAN framework is skillfully optimized by taking into account multidegradation factors. Comprehensive experiments conducted on the UIEB data set demonstrate the effectiveness and superiority of the proposed UP‐GAN in terms of both subjective and objective aspects. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
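As a parameter-free sketch of the two weighting axes in a channel-spatial (dual) attention module; the UP-GAN module itself is learned, so everything below is an illustrative assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(feat):
    """Parameter-free sketch of channel + spatial attention.

    feat: (C, H, W) feature map.
    """
    # Channel attention: one weight per channel from global average pooling.
    ch = sigmoid(feat.mean(axis=(1, 2)))   # shape (C,)
    # Spatial attention: one weight per pixel from the cross-channel mean.
    sp = sigmoid(feat.mean(axis=0))        # shape (H, W)
    # Reweight the features along both axes.
    return feat * ch[:, None, None] * sp[None, :, :]

feat = np.ones((2, 3, 3))
out = dual_attention(feat)
```

A learned module replaces the pooled statistics with small trainable subnetworks, but the broadcasting structure is the same.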
9. Underwater image quality enhancement using fusion of adaptive colour correction and improved contrast enhancement strategy.
- Author
-
Raveendran, Smitha, Patil, Mukesh D., and Birajdar, Gajanan K.
- Subjects
-
Image fusion, Image intensifiers, Principal components analysis, Image enhancement (Imaging systems), Color, Algorithms
- Abstract
Underwater images suffer from poor visibility due to low contrast, colour degradation, and noise, making them unsuitable for several visual tasks. While significant progress has been made recently in enhancing underwater images, achieving robust improvements remains a challenge. To address these issues, a new two-step framework combining adaptive colour correction and an improved local and global contrast enhancement algorithm is proposed to improve the quality of underwater images significantly. First, an adaptive colour correction strategy is developed that compensates degraded colour channels according to compensation factors, restoring the balance in the damaged colour channels. Secondly, an improved contrast enhancement algorithm is presented to enhance the local and global contrast of underwater images and to sharpen the textural details. Finally, these two output images are fused together using principal component analysis (PCA) fusion. Experiments conducted on the established benchmarks for underwater image enhancement, namely the UIEB and EUVP datasets, have demonstrated the capability of the proposed algorithm to produce images of superior quality in terms of qualitative and quantitative performance measures compared to the latest state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
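The PCA fusion step named here is a classic technique: the leading eigenvector of the covariance between the two source images sets the blend weights. A minimal NumPy version:

```python
import numpy as np

def pca_fusion(a, b):
    """Fuse two enhanced versions of an image with PCA weights: the leading
    eigenvector of the 2x2 covariance of the flattened images sets the blend."""
    data = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(data)                      # 2x2 covariance of the two sources
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    v = np.abs(vecs[:, np.argmax(vals)])    # leading (principal) eigenvector
    w = v / v.sum()                         # normalize to blend weights
    return w[0] * a + w[1] * b

a = np.array([[0.0, 1.0], [0.0, 1.0]])   # high-contrast source
b = np.array([[0.0, 0.5], [0.0, 0.5]])   # low-contrast source
fused = pca_fusion(a, b)                  # leans toward the higher-variance source
```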
10. BG-YOLO: A Bidirectional-Guided Method for Underwater Object Detection.
- Author
-
Cao, Ruicheng, Zhang, Ruiteng, Yan, Xinyue, and Zhang, Jian
- Subjects
-
Object recognition (Computer vision), Image intensifiers, Detectors
- Abstract
Degraded underwater images decrease the accuracy of underwater object detection. Existing research uses image enhancement methods to improve the visual quality of images, but this may not benefit underwater object detection and can seriously degrade detector performance. To alleviate this problem, we propose a bidirectional-guided method for underwater object detection, referred to as BG-YOLO. In the proposed method, the network is organized by constructing an image enhancement branch and an object detection branch in parallel. The image enhancement branch consists of a cascade of an image enhancement subnet and an object detection subnet; the object detection branch consists of a detection subnet only. A feature-guided module connects the shallow convolution layers of the two branches. When training the image enhancement branch, the object detection subnet in the enhancement branch guides the image enhancement subnet to be optimized in the direction most conducive to the detection task. The shallow feature map of the trained image enhancement branch is output to the feature-guided module, which constrains the optimization of the object detection branch through a consistency loss and prompts the object detection branch to learn more detailed information about the objects, enhancing detection performance. During detection, only the object detection branch is retained, so no additional computational cost is introduced. Extensive experiments demonstrate that the proposed method significantly improves the detection performance of the YOLOv5s object detection network (mAP increased by up to 2.9%) and maintains the same inference speed as YOLOv5s (132 fps). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
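The consistency loss that couples the two branches can be sketched generically as an L2 distance between shallow feature maps; the exact loss used by BG-YOLO is not specified in the abstract, so this is an assumption:

```python
import numpy as np

def consistency_loss(f_det, f_enh):
    """Feature-guided consistency: L2 distance pulling the detection branch's
    shallow feature map toward the (frozen) enhancement branch's feature map."""
    return np.mean((f_det - f_enh) ** 2)

f_enh = np.ones((8, 4, 4))    # guidance features from the trained enhancement branch
f_det = np.zeros((8, 4, 4))   # detection-branch features being optimized
loss = consistency_loss(f_det, f_enh)
```

In training, this term is added to the detector's usual objective; at inference the enhancement branch (and hence this loss) is dropped, which is why no extra cost remains.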
11. No-Reference Quality Assessment Based on Dual-Channel Convolutional Neural Network for Underwater Image Enhancement.
- Author
-
Hu, Renzhi, Luo, Ting, Jiang, Guowei, Lin, Zhiqiang, and He, Zhouyan
- Subjects
Convolutional neural networks, Image intensifiers, Reflectance, Algorithms, Lighting
- Abstract
Underwater images are important for underwater vision tasks, yet their quality often degrades during imaging, prompting the development of Underwater Image Enhancement (UIE) algorithms. This paper proposes a Dual-Channel Convolutional Neural Network (DC-CNN)-based quality assessment method to evaluate the performance of different UIE algorithms. Specifically, inspired by intrinsic image decomposition, the enhanced underwater image is decomposed into reflectance carrying color information and illumination carrying texture information, based on Retinex theory. Afterward, we design a DC-CNN with two branches to learn color and texture features from reflectance and illumination, respectively, reflecting the distortion characteristics of enhanced underwater images. To integrate the learned features, a feature fusion module and an attention mechanism are employed to align the features efficiently with human visual perception characteristics. Finally, a quality regression module establishes the mapping between the extracted features and quality scores. Experimental results on two public enhanced underwater image datasets (i.e., UIQE and SAUD) show that the proposed DC-CNN method outperforms a variety of existing quality assessment methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
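The Retinex decomposition used as input to the two branches is a standard split: illumination as a smoothed version of the image, reflectance as the pointwise ratio. A dependency-free sketch with a box filter (a Gaussian filter is more common; this simplification is an assumption):

```python
import numpy as np

def retinex_split(img, k=3):
    """Retinex-style decomposition: illumination = local mean (box blur),
    reflectance = image / illumination."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    illum = np.zeros_like(img)
    h, w = img.shape
    # Box blur: each pixel's illumination is the mean of its k x k neighbourhood.
    for i in range(h):
        for j in range(w):
            illum[i, j] = padded[i:i + k, j:j + k].mean()
    # Reflectance is the ratio image; the epsilon guards against division by zero.
    reflect = img / np.maximum(illum, 1e-6)
    return illum, reflect

img = np.full((4, 4), 0.5)          # a flat patch decomposes trivially
illum, reflect = retinex_split(img)
```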
12. A multi-color and multistage collaborative network guided by refined transmission prior for underwater image enhancement.
- Author
-
Ouyang, Ting, Zhang, Yongjun, Zhao, Haoliang, Cui, Zhongwei, Yang, Yitong, and Xu, Yujie
- Subjects
-
Color space, Image intensifiers, Light scattering, Feature extraction, Parameter estimation
- Abstract
Due to the attenuation and scattering properties of light in underwater scenes, underwater images are generally subject to color deviations and low contrast, which is not conducive to the follow-up algorithms. To alleviate these two problems, we propose a multi-color and multistage collaborative network guided by refined transmission, called MMCGT, to accomplish the enhancement tasks. Specifically, we first design an accurate method of parameter estimation to derive transmission priors that are more suitable for underwater imaging, such as min–max conversion, low-pass filter-based estimation and saturation detection. Then, we propose a multistage and multi-color space collaborative network to decompose the underwater image enhancement task into more straightforward and controllable subtasks, including colorful feature extraction, color deviation detection, and image position information retention. Finally, we apply the derived transmission prior to the transmission-guided block of the network and effectively combine the well-designed physical-inconsistency loss with Charbonnier loss and VGG loss to guide the MMCGT to compensate for the quality-degraded regions better. Extensive experiments show that MMCGT achieves better evaluation results under the dual guidance of physics and deep learning than the competing methods in visual quality and quantitative metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
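The Charbonnier loss mentioned here is a smooth L1 variant that is robust to outliers yet differentiable at zero; a one-liner in NumPy:

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: sqrt(diff^2 + eps^2), a smooth approximation of L1
    that stays differentiable at zero residual."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

zero_case = charbonnier(np.zeros(4), np.zeros(4))  # floor of ~eps at zero residual
```

For large residuals it behaves like L1 (linear growth); near zero it behaves like L2, which is why it is a common replacement for L1 in restoration losses such as MMCGT's.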
13. A Two-Stage Approach for Underwater Image Enhancement Via Color-Contrast Enhancement and Trade-Off.
- Author
-
Xu, Huipu and Chen, Shuo
- Subjects
-
Image intensifiers, Attenuation of light, Commons, Luminous flux, Histograms
- Abstract
The underwater imaging environment is very different from land, and some common land image enhancement methods are often not applicable underwater. This paper proposes a two-step underwater image enhancement method. White balance is a commonly used color correction method, but in underwater environments the traditional white balance method has certain limitations and results in severe color bias, caused by the faster attenuation of red light underwater. We develop a new white balance method based on the assumption of the gray-world method, with an embedded red correction module that makes it more suitable for underwater environments. For contrast correction, we design an illuminance correction method based on the Retinex model; it significantly reduces the computational burden compared to traditional methods while enhancing the brightness and contrast of the images. In addition, most current underwater image enhancement methods treat color and contrast separately. However, these two factors influence each other, and processing them separately may lead to suboptimal results. Therefore, we investigate the relationship between color and contrast and propose a trade-off method that integrates both within a histogram framework, achieving a balanced enhancement of the two. To avoid chance results, we evaluated on four datasets, each containing 800 randomly selected images. On the five no-reference metrics, our method ranked first on three and second on two; it ranked second on the two full-reference metrics. Superior results were also achieved in runtime comparisons. Finally, we further demonstrate the superiority of our method through detailed demonstrations and ablation experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
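The gray-world assumption named in this abstract scales each channel so its mean matches the global mean. In the sketch below, clamping the gain (`max_gain`) is a crude stand-in for the paper's red correction module, which the abstract does not specify:

```python
import numpy as np

def gray_world_wb(rgb, max_gain=3.0):
    """Gray-world white balance: scale each channel so its mean matches the
    global mean. The gain clamp keeps the nearly extinct red channel from
    being over-amplified (a simplification of the paper's red correction)."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = np.minimum(means.mean() / np.maximum(means, 1e-6), max_gain)
    return np.clip(rgb * gains, 0.0, 1.0)

img = np.zeros((2, 2, 3))
img[..., 0], img[..., 1], img[..., 2] = 0.2, 0.6, 0.4   # greenish cast
out = gray_world_wb(img)   # all three channel means equalize
```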
14. Unveiling the hidden depths: advancements in underwater image enhancement using deep learning and auto-encoders.
- Author
-
Bantupalli, Jaisuraj, Kachapilly, Amal John, Roy, Sanjukta, and L. K., Pavithra
- Abstract
Underwater images hold immense value for various fields, including marine biology research, underwater infrastructure inspection, and exploration activities. However, capturing high-quality images underwater proves challenging due to light absorption and scattering, which cause color distortion and blue-green hues. These phenomena also decrease contrast and visibility, hindering the ability to extract valuable information. Existing image enhancement methods often struggle to achieve accurate color correction while preserving crucial image details. This article proposes a novel deep learning-based approach for underwater image enhancement that leverages the power of autoencoders. Specifically, a convolutional autoencoder is trained to learn a mapping from the distorted colors present in underwater images to their true, color-corrected counterparts. The proposed model is trained and tested using the Enhancing Underwater Visual Perception (EUVP) and Underwater Image Enhancement Benchmark (UIEB) datasets. The performance of the model is evaluated and compared with various traditional and deep learning-based image enhancement techniques using the quality measures structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean squared error (MSE). This research aims to address the critical limitations of current techniques by improving color fidelity and enabling better information extraction for various applications. Our proposed color correction model based on an encoder-decoder network achieves higher SSIM and PSNR values. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
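The MSE and PSNR measures used for evaluation are straightforward to compute; for images scaled to [0, 1]:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak].
    Higher is better; identical images give infinity."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.zeros((4, 4))
noisy = ref + 0.1          # uniform error of 0.1 -> MSE of 0.01
score = psnr(ref, noisy)   # 10 * log10(1 / 0.01) = 20 dB
```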
15. Multi-Scale and Multi-Layer Lattice Transformer for Underwater Image Enhancement.
- Author
-
Hsu, Wei-Yen and Hsu, Yu-Yu
- Subjects
Light absorption, Image reconstruction, Image intensifiers, Light scattering, Wavelengths
- Abstract
Underwater images are often subject to color deviation and a loss of detail due to the absorption and scattering of light. The challenge of enhancing underwater images is compounded by variations in wavelength and distance attenuation, as well as color deviations that exist across different scales and layers, resulting in different degrees of color deviation, attenuation, and blurring. To address these issues, we propose a novel multi-scale and multi-layer lattice transformer (MMLattFormer) to effectively eliminate artifacts and color deviation, prevent over-enhancement, and preserve details across various scales and layers, thereby achieving more accurate and natural results in underwater image enhancement. The proposed MMLattFormer model integrates the advantage of LattFormer in enhancing global perception with a multi-scale, multi-layer configuration that leverages the differences and complementarities between features at various scales and layers to boost local perception. The MMLattFormer model comprises multi-scale and multi-layer LattFormers. Each LattFormer primarily encompasses two modules: a Multi-head Transposed-attention Residual Network (MTRN) and a Gated-attention Residual Network (GRN). The MTRN module enables cross-pixel interaction and pixel-level aggregation in an efficient manner to extract more significant and distinguishable features, whereas the GRN module can effectively suppress under-informed or redundant features and retain only useful information, enabling excellent image restoration that exploits the local and global structures of the images. Moreover, to better capture local details, we introduce depthwise convolution in these two modules before generating global attention maps and decomposing images into different features, so as to better capture the local context in image features.
The qualitative and quantitative results indicate that the proposed method outperforms state-of-the-art approaches in delivering more natural results. This is evident in its superior detail preservation, effective prevention of over-enhancement, and successful removal of artifacts and color deviation on several public datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Development of Attentive-Based Trans-UNet and Cycle Generative Adversarial Networks for Underwater Image Enhancement.
- Author
-
Ajanya, P. and Meera, S.
- Subjects
-
Generative adversarial networks, Deep learning, Image intensifiers, Contrast effect, Research personnel
- Abstract
Researchers across the globe have been investigating underwater photographs, and ways to capture pictures with outstanding clarity, for the past couple of decades. Improving the obtained photographs is also an exhausting task. Typically used underwater image-capturing devices are unable to acquire high-resolution photos underwater, and their maintenance is extremely costly. Underwater images contain multiple flaws because of physical processes like attenuation and scattering: the photographs suffer color distortion, blurriness, and low contrast. Deep learning technologies have become popular across several research studies and have gradually grown in societal impact. Several techniques demand sets of training photos, but gathering such sets can be challenging because of the complex nature of the underwater environment. Restoring an image captured in water is a difficult task that has gained prominence in recent days. The major goal is to enhance underwater images by lowering graininess and by adjusting and refining the photos with deep learning models. To accomplish this objective, an intelligent attention-based deep learning model is proposed. In the first stage, unrefined images are gathered from typical data sources. The collected underwater images are then fed into the Attentive-based Trans-UNet-CycleGAN (ATUNet-CGAN) model, where a Transformer-based UNet is integrated with Cycle Generative Adversarial Networks (GANs). An attention mechanism is also incorporated in Trans-UNet-CycleGAN to improve the quality of underwater images. Finally, the performance of the model is validated using different metrics and compared with baseline approaches; the proposed methodology outperforms them in enhancing image quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Underwater Optical Imaging: Methods, Applications and Perspectives.
- Author
-
Hao, Yansheng, Yuan, Yaoyao, Zhang, Hongman, and Zhang, Ze
- Subjects
-
Image reconstruction, Optical images, Image intensifiers, Topographical surveying, Deep learning, Software measurement
- Abstract
Underwater optical imaging is essential for exploring the underwater environment to provide information for planning and regulating underwater activities in various underwater applications, such as aquaculture farm observation, underwater topographical survey, and underwater infrastructure monitoring. Thus, there is a need to investigate the underwater imaging process and propose clear and long-range underwater optical imaging methods to fulfill the demands of academia and industry. In this manuscript, we classify the eighteen most commonly used underwater optical imaging methods into two groups according to the imaging principle, (1) hardware-based and (2) software-based methods, each with an explanation of the theory, features, and applications. Furthermore, we also discuss the current challenges and future directions for improving the performance of current methods, such as improving the accuracy of underwater image formation model estimation, enlarging underwater image datasets, proposing comprehensive underwater imaging evaluation metrics, estimating underwater depth, and integrating different methods (e.g., hardware- and software-based methods for computational imaging) to promote imaging performance not only in the laboratory but also in practical underwater scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Adaptive contrast enhancement for underwater image using imaging model guided variational framework.
- Author
-
Dai, Chenggang and Lin, Mingxing
- Subjects
Image intensifiers, Corruption, Absorption, Color
- Abstract
Underwater images are typically characterized by blurry details, poor contrast, and color distortions owing to absorption and scattering effects, which limits the performance of several high-level tasks. However, most existing approaches are incapable of removing these multiple corruptions elegantly. Hence, an imaging model guided variational framework is proposed to address the corruptions simultaneously. In this study, the underwater imaging model is imposed on the variational framework to correct the deviated color. Differences of gray values along the channel and spatial dimensions are used to maximize the contrast of enhanced images. Furthermore, an adaptive weight function is designed to address the issue of excessive enhancement. Finally, a coarse-to-fine strategy is employed to solve the variational framework efficiently. Owing to this reasonable framework, the proposed method generalizes well to sandstorm images and hazy images. The provided experiments demonstrate that the proposed method attains the highest CIEDE2000, UIQM, and FDUM scores, i.e., 40.42, 0.82, and 5.03. These extensive experiments validate the superiority of the proposed method in improving the quality of underwater images from both qualitative and quantitative perspectives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Underwater Image Enhancement Based on a Lightweight Nonlinear Activation-Free Network (基于轻量级非线性无激活网络的水下图像增强).
- Author
-
Huang, Hongtao (黄宏涛) and Yuan, Hongchun (袁红春)
- Subjects
-
Feature extraction, Image intensifiers, Image processing, Deep learning, Color, Image enhancement (Imaging systems)
- Abstract
To overcome the common issues of color distortion and low contrast in underwater image processing, this study developed an innovative underwater image enhancement technique based on a lightweight non-linear activation free network. The core feature of this technique is the use of multiple cascaded non-linear activation free modules without traditional activation functions, significantly enhancing the flow of information and the efficiency of feature extraction. Additionally, the model integrates an innovative layer attention mechanism, which effectively identifies and optimizes feature dependencies between different layers, enhancing the expression of key information through dynamic adjustment of feature weights. To comprehensively evaluate the performance of the proposed method, detailed experiments were conducted on the Large Scale Underwater Image (LSUI) dataset. Compared with leading models such as FUnIE-GAN and Shallow-UWnet, our model demonstrated a significant advantage in structural similarity index (SSIM), with improvements of 8.17% and 4.13% respectively, markedly enhancing the color accuracy and detail retention of the images. Furthermore, the parameter count of our model was significantly reduced, decreasing by 98% and 50% respectively, greatly enhancing the model's practicality and deployment capabilities in environments with limited computational resources. The results of this study confirm the effectiveness of this enhancement technique in addressing key visual challenges in underwater imaging and also demonstrate its potential for application in extreme visual environments. By introducing this lightweight and efficient image enhancement approach, new pathways have been opened for the further development and innovation of underwater image processing technologies, laying a solid foundation for the widespread deployment of underwater vision systems in practical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
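The abstract does not specify the activation-free module, so the sketch below uses NAFNet's SimpleGate, the best-known activation-free nonlinearity, as an illustrative stand-in: splitting the channels in half and multiplying makes the block nonlinear without any activation function.

```python
import numpy as np

def simple_gate(feat):
    """SimpleGate (from NAFNet): split channels in half and multiply elementwise.
    The product is nonlinear even though no activation function is applied.
    Illustrative stand-in; the paper's exact module is unspecified.

    feat: (C, H, W) with even C. Returns (C // 2, H, W).
    """
    c = feat.shape[0] // 2
    return feat[:c] * feat[c:]

feat = np.concatenate([np.full((2, 2, 2), 3.0), np.full((2, 2, 2), 0.5)])
out = simple_gate(feat)   # halves the channel count: (4, 2, 2) -> (2, 2, 2)
```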
20. Underwater image enhancement via color conversion and white balance-based fusion.
- Author
-
Xu, Hanning, Mu, Pan, Liu, Zheyuan, and Cheng, Shichao
- Subjects
-
Convolutional neural networks, Image intensifiers, Computer vision, Light absorption, Refraction (Optics), Image enhancement (Imaging systems)
- Abstract
The task of enhancing underwater images presents a significant challenge due to the refraction and absorption of light in water, resulting in images that often appear bluish or greenish with diminished contrast. Furthermore, the scarcity of underwater datasets complicates the achievement of robust generalization capacity to address complex underwater scenarios. In this study, we introduce a generalized underwater image enhancement model with color-guided adaptive feature fusion (GU-CAFF), designed to rectify various degraded underwater images using a minimal amount of training data. GU-CAFF primarily comprises two modules: a multi-level color-feature encoder (MCE) and a white balance-based fusion (WBF) module. The MCE integrates physical models to extract features from underwater images exhibiting different color deviations, emphasizing essential features while preserving their structural information. In addition, WBF, in conjunction with a statistical model, is proposed to fuse the features extracted by the encoder and rectify the color distortion of specific pixels in degraded images. The proposed method can be trained once on our developed dataset and exhibits robust generalization capabilities on other datasets. Quantitative and qualitative comparisons are conducted with several state-of-the-art underwater image enhancement models, demonstrating our superior performance in enhancing underwater images. The source code will be available at https://github.com/shiningZZ/GU-CAFF. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. A novel highland and freshwater-circumstance dataset: advancing underwater image enhancement.
- Author
-
Li, Zhen, Yan, Kaixiang, Zhou, Dongming, Wang, Changcheng, and Quan, Jiarui
- Subjects
- *
BODIES of water , *IMAGE intensifiers , *MARINE engineering , *SEAWATER , *RESEARCH personnel - Abstract
As an important underlying visual processing task, underwater image enhancement has received considerable attention from researchers due to its importance in marine engineering and lake ecosystem optimization. However, the various underwater image enhancement algorithms proposed to date have been evaluated mainly on marine water-body datasets, and it is not clear whether these algorithms perform well on datasets collected from inland freshwater lakes. To bridge this gap, we construct, for the first time, an underwater image dataset for highland and freshwater circumstances (HFUI) using 1000 real images to complement existing underwater image datasets. In addition, we propose an unsupervised underwater image enhancement algorithm (HUFI-Net) specifically for this dataset to correct the sharpness and color of the images. This algorithm, along with current advanced underwater image enhancement algorithms, was investigated qualitatively and quantitatively on this dataset to evaluate the effectiveness and limitations of the various algorithms and to provide novel ideas for future underwater image enhancement research. We further validate the generalization and effectiveness of the algorithm on the UIEB and RUIE underwater image datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Adaptive variational decomposition for water-related optical image enhancement.
- Author
-
Zhou, Jingchun, Chen, Shuhan, Zhang, Dehuan, He, Zongxin, Lam, Kin-Man, Sohel, Ferdous, and Vivone, Gemine
- Subjects
- *
IMAGE intensifiers , *ATTENUATION of light , *OPTICAL images , *REFLECTANCE , *PRIOR learning , *IMAGE enhancement (Imaging systems) - Abstract
Underwater images suffer from blurred details and color distortion due to light attenuation from scattering and absorption. Current underwater image enhancement (UIE) methods overlook the effects of forward scattering, leading to difficulties in addressing low contrast and blurriness. To address the challenges caused by forward and backward scattering, we propose a novel variational-based adaptive method for removing scattering components. Our method addresses both forward and backward scattering and effectively removes interference from suspended particles, significantly enhancing image clarity and contrast for underwater applications. Specifically, our method employs a backward scattering pre-processing step to correct erroneous pixel interference and histogram equalization to remove color bias, improving image contrast. The backward scattering noise removal method in the variational model uses horizontal and vertical gradients as constraints to remove backward scattering noise. However, it can remove only a small portion of the forward scattering components caused by light deviation. We therefore develop an adaptive method using the Manhattan distance to completely remove forward scattering. Our approach integrates prior knowledge to construct penalty terms and uses a fast solver to achieve strong decoupling of incident light and reflectance. We effectively enhance image contrast and correct color by combining variational methods with histogram equalization. Our method outperforms state-of-the-art methods on the UIEB dataset, achieving UCIQE and URanker scores of 0.636 and 2.411, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Underwater image enhancement based on weighted guided filter image fusion.
- Author
-
Xiang, Dan, Wang, Huihua, Zhou, Zebin, Zhao, Hao, Gao, Pan, Zhang, Jinwen, and Shan, Chun
- Abstract
An underwater image enhancement technique based on weighted guided filter image fusion is proposed to address challenges including optical absorption and scattering, color distortion, and uneven illumination. The method consists of three stages: color correction, local contrast enhancement, and fusion. In terms of color correction, basic correction is achieved through channel compensation and remapping, with saturation adjusted based on the histogram distribution to enhance visual richness. For local contrast enhancement, the approach combines box filtering with a variational model to improve image saturation. Finally, the method uses weighted guided filter image fusion to produce high-visual-quality underwater images. Our method outperforms eight state-of-the-art algorithms on no-reference metrics, demonstrating its effectiveness and innovation. We also conducted application tests and runtime comparisons to further validate the practicality of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
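For reference, the guided filter underpinning fusion methods like the one above fits a local linear model of the output on a guidance image (He et al.'s formulation). This generic NumPy sketch omits the paper's weighting scheme:

```python
import numpy as np

def box_filter(x, r):
    """Mean filter over a (2r+1) x (2r+1) window, with edge padding."""
    k = 2 * r + 1
    padded = np.pad(x, r, mode="edge")
    out = np.zeros_like(x, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def guided_filter(guide, src, r=2, eps=1e-3):
    """Edge-preserving smoothing of `src`, steered by the guidance image."""
    mean_I = box_filter(guide, r)
    mean_p = box_filter(src, r)
    cov_Ip = box_filter(guide * src, r) - mean_I * mean_p   # local covariance
    var_I = box_filter(guide * guide, r) - mean_I ** 2      # local variance
    a = cov_Ip / (var_I + eps)      # per-window linear coefficient
    b = mean_p - a * mean_I         # per-window offset
    return box_filter(a, r) * guide + box_filter(b, r)
```

With the image itself as the guide, the filter smooths flat regions while keeping edges, which is what makes it attractive for refining fusion weight maps.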
24. Enhancing Underwater Images through Multi-Frequency Detail Optimization and Adaptive Color Correction.
- Author
-
Gao, Xiujing, Jin, Junjie, Lin, Fanchao, Huang, Hongwu, Yang, Jiawei, Xie, Yongfeng, and Zhang, Biwen
- Subjects
COLOR space ,ATTENUATION of light ,UNDERWATER photography ,IMAGE intensifiers ,NETWORK performance ,IMAGE enhancement (Imaging systems) - Abstract
This paper presents a novel underwater image enhancement method addressing the challenges of low contrast, color distortion, and detail loss prevalent in underwater photography. Unlike existing methods that may introduce color bias or blur during enhancement, our approach leverages a two-pronged strategy. First, an Efficient Fusion Edge Detection (EFED) module preserves crucial edge information, ensuring detail clarity even in challenging turbidity and illumination conditions. Second, a Multi-scale Color Parallel Frequency-division Attention (MCPFA) module integrates multi-color space data with edge information. This module dynamically weights features based on their frequency domain positions, prioritizing high-frequency details and areas affected by light attenuation. Our method further incorporates a dual multi-color space structural loss function, optimizing the performance of the network across RGB, Lab, and HSV color spaces. This approach enhances structural alignment and minimizes color distortion, edge artifacts, and detail loss often observed in existing techniques. Comprehensive quantitative and qualitative evaluations using both full-reference and no-reference image quality metrics demonstrate that our proposed method effectively suppresses scattering noise, corrects color deviations, and significantly enhances image details. In terms of objective evaluation metrics, our method achieves the best performance in the test dataset of EUVP with a PSNR of 23.45, SSIM of 0.821, and UIQM of 3.211, indicating that it outperforms state-of-the-art methods in improving image quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Unsupervised Controllable Enhancement of Underwater Images Based on Multi-Domain Attribute Representation Disentanglement
- Author
-
Shijian ZHOU, Pengli ZHU, Siyuan LIU, and Han CHEN
- Subjects
underwater image enhancement ,multi-domain attribute representation disentanglement ,unsupervised ,distortion ,feature interpolation ,Naval architecture. Shipbuilding. Marine engineering ,VM1-989 - Abstract
Unsupervised enhancement technology for underwater images is mainly oriented toward specific distortion factors and exhibits limited adaptability to the variety of distorted underwater images. The content attribute (structure) of the image migrates and changes with the style attribute (appearance), resulting in an uncontrolled enhancement effect and affecting the stability and accuracy of subsequent environmental perception and processing. To address this issue, an unsupervised controllable enhancement method for underwater images based on multi-domain attribute representation disentanglement (MARD) is proposed in this paper. First, a framework of multi-domain unified representation disentanglement with cycle-consistent adversarial translations was designed, enhancing the algorithm's adaptability to multiple distortion factors. Subsequently, a dual-encoding and conditional decoding network structure was constructed. Finally, a series of losses for MARD was designed to enhance the independence and controllability of quality, content, style, and other attribute representations. Experimental results demonstrate that the proposed algorithm not only eliminates various distortions such as color aberration, blur, noise, and low illumination in underwater images but also quantifies image style codes by linear interpolation for controllable enhancement of underwater images.
- Published
- 2024
- Full Text
- View/download PDF
26. Scattered Light Compensation Combined with Color Preservation and Contrast Balance for Underwater Image Enhancement
- Author
-
Zemeng NING, Sen LIN, and Xingran LI
- Subjects
unmanned undersea system ,color correction ,underwater image enhancement ,texture enhancement ,scattering light compensation ,Naval architecture. Shipbuilding. Marine engineering ,VM1-989 - Abstract
In view of the color deviation, low contrast, and blurring in underwater images, an underwater image enhancement method based on scattered light compensation combined with color preservation and contrast balance was proposed. First, the relative total variation model was used to separate the structure and texture layers of the image. The color deviation of the structural layer was corrected by defining a compensation-coefficient error matrix based on the RGB spatial mapping, and the texture layer was enhanced by filtering separation and fusion to prevent loss of the image's initial features. The enhanced texture layer was superimposed on the structural layer to obtain the output of the first branch. In addition, in the contrast balance module, color-preserving contrast-limited adaptive histogram equalization based on a spatial transformation was performed to further improve contrast and brightness. Finally, the enhanced results of the two branches were fused to produce the output image. Comparisons conducted on different datasets verify that the proposed method performs better at balancing color deviation, enhancing details, and deblurring, and has practical application value in vision tasks for unmanned undersea systems.
- Published
- 2024
- Full Text
- View/download PDF
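Contrast-limited adaptive histogram equalization (CLAHE), used in the contrast balance module above, caps the histogram before equalizing so that noise in flat regions is not over-amplified. A simplified global version, sketching only the clipping idea (full CLAHE adds tiling and bilinear interpolation between tiles):

```python
import numpy as np

def clipped_histogram_equalization(channel, clip_limit=0.01, bins=256):
    """Histogram equalization with a clip limit: mass above the limit is
    redistributed evenly before the cumulative mapping is built.

    channel: float array with values in [0, 1].
    """
    hist, _ = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(np.float64) / channel.size
    excess = np.clip(hist - clip_limit, 0.0, None).sum()   # mass above the cap
    hist = np.minimum(hist, clip_limit) + excess / bins    # redistribute evenly
    cdf = np.cumsum(hist)
    cdf = cdf / cdf[-1]                                    # normalize to [0, 1]
    idx = np.clip((channel * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]

# A low-contrast channel confined to [0.4, 0.6] gets spread out.
flat = clipped_histogram_equalization(np.linspace(0.4, 0.6, 1000))
```

Lower clip limits give gentler equalization; with `clip_limit=1.0` the function reduces to plain histogram equalization.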
27. MACT: Underwater image color correction via Minimally Attenuated Channel Transfer.
- Author
-
Zhang, Weibo, Wang, Hao, Ren, Peng, and Zhang, Weidong
- Abstract
Underwater images usually show reduced quality because light propagation underwater is affected by scattering and absorption, severely limiting the effectiveness of underwater images in practical applications. To deal effectively with poor underwater image quality, this paper proposes an innovative Minimally Attenuated Channel Transfer (MACT) method that corrects color distortion and enhances the visibility of underwater images. In underwater images captured from natural scenes, specific color channels are often severely attenuated. To compensate for the information loss caused by channel attenuation, our color correction method selects the least degraded channel of the degraded image as the reference channel. Subsequently, we employ the reference channel and a color compensation factor obtained by dual-mean difference to perform adaptive color compensation on the different color-degraded channels. Finally, we balance the histogram distribution of the compensated color channels by a linear stretching operation. Extensive experimental results on three benchmark datasets demonstrate that our preprocessing method achieves better performance. The project page is available at https://www.researchgate.net/publication/384252681_2024-MACT. • We propose a double mean difference method for color compensation. • We enhance color distribution by determining the extreme values of color channels. • Our method can preprocess images with defogging. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
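One common formulation of reference-channel color compensation, in the spirit of the minimally attenuated channel idea above, shifts each attenuated channel toward the least degraded one. This is an illustrative sketch in the style of Ancuti-style red-channel compensation, not the paper's dual-mean-difference formula:

```python
import numpy as np

def compensate_by_reference_channel(img):
    """Compensate attenuated channels using the least-attenuated one as reference.

    The reference is taken to be the channel with the highest mean (least
    degraded); each other channel is shifted by the mean difference,
    modulated pixel-wise by the reference channel itself.
    """
    means = img.reshape(-1, 3).mean(axis=0)
    ref = int(np.argmax(means))            # least-attenuated channel index
    out = img.copy()
    for c in range(3):
        if c == ref:
            continue
        alpha = means[ref] - means[c]      # how much channel c lags the reference
        out[..., c] = img[..., c] + alpha * img[..., ref]
    return np.clip(out, 0.0, 1.0)

# A blue-dominant patch: the red channel is strongly attenuated.
img = np.stack([np.full((4, 4), 0.1),
                np.full((4, 4), 0.4),
                np.full((4, 4), 0.5)], axis=-1)
compensated = compensate_by_reference_channel(img)
```

A linear stretch of each compensated channel, as the abstract describes, would then rebalance the histograms.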
28. Fast fusion-based underwater image enhancement with adaptive color correction and contrast enhancement.
- Author
-
Yao, Xinzhe, Liang, Xiuman, Yu, Haifeng, and Liu, Zhendong
- Abstract
Owing to the scattering by suspended particles and the absorption of light by water, single underwater images often suffer from serious color cast and haze, which hinder the application of advanced vision technology underwater. To address these degradation problems, we propose a fast fusion-based method for underwater image enhancement. First, an adaptive color correction module with color cast judgment is designed to adjust the color cast of different scenes. Then, we design a dehazing and detail enhancement module to adjust the luminance channel of the image. Finally, a Laplace decomposition and multi-scale fusion strategy based on the luminance channel is proposed to enhance the overall contrast of the image. Our method does not depend on complex physical imaging models and processes images only at the channel level, which reduces the running time of the algorithm. The experimental results demonstrate that our algorithm outperforms state-of-the-art algorithms in the same field. Moreover, our method is equally applicable to foggy and sandstorm images. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
29. Underwater image enhancement through color deviation detection-guided peak flattening.
- Author
-
Zhu, Jie, Wang, Huibin, Chen, Zhe, Zhang, Lili, and Zhang, Min
- Abstract
Underwater images frequently exhibit color deviation and blurring. Color deviation arises from the selective spectral absorption of light, whereas blurring results from particle scattering within the water. These challenges impede the performance of high-level visual perception tasks. To improve the underwater image quality, we propose an adaptive method for underwater image enhancement that employs color deviation detection-guided peak flattening. Utilizing six features identified through statistical analysis of the latest underwater real-image datasets, we develop a color deviation detection model for underwater images, which employs ensemble learning to quantify the extent of color deviation. Subsequently, we devise the peak flattening algorithm to achieve histogram adaptive partition conversion. The conversion range is determined by the estimated color deviation value, eliminating the need for iteration. For channels with severe color deviation, we employ the weighted fusion to restore partial gray distribution. Comparisons with state-of-the-art methods demonstrate that the proposed method significantly improves contrast and color fidelity, particularly in images with severe degradation. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
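Histogram-conversion methods like the peak-flattening algorithm above generalize the basic percentile contrast stretch, which maps chosen low/high percentiles to the ends of the intensity range. A generic sketch (not the paper's adaptive partition conversion):

```python
import numpy as np

def percentile_stretch(channel, low=1.0, high=99.0):
    """Map the low/high percentiles of a channel to 0 and 1, clipping outliers."""
    lo, hi = np.percentile(channel, [low, high])
    if hi - lo < 1e-6:                    # flat channel: nothing to stretch
        return np.zeros_like(channel)
    return np.clip((channel - lo) / (hi - lo), 0.0, 1.0)

# A low-contrast channel confined to [0.3, 0.5] spreads to the full range.
stretched = percentile_stretch(np.linspace(0.3, 0.5, 500))
```

Using percentiles rather than the raw min/max makes the stretch robust to a few outlier pixels, at the cost of saturating the extreme tails.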
30. UICE-MIRNet guided image enhancement for underwater object detection
- Author
-
Pratima Sarkar, Sourav De, Sandeep Gurung, and Prasenjit Dey
- Subjects
Computer Vision ,Deep Neural Network ,MIRNet ,Object Detection ,Underwater Image Enhancement ,YOLOv4 ,Medicine ,Science - Abstract
Underwater object detection is a crucial aspect of monitoring aquaculture resources to preserve the marine ecosystem. In most cases, low-light and scattered lighting conditions create challenges for computer vision-based underwater object detection. To address these issues, low-colourfulness and low-light image enhancement techniques are explored. This work proposes an underwater image enhancement technique called Underwater Image Colorfulness Enhancement MIRNet (UICE-MIRNet) to increase the visibility of small, multiple, dense objects, followed by underwater object detection using YOLOv4. UICE-MIRNet is a specialized version of classical MIRNet that handles random increments of brightness features to address the visibility problem. The proposed UICE-MIRNet restricts brightness and also improves the colourfulness of underwater images. UICE-MIRNet consists of an Underwater Image-Colorfulness Enhancement Block (UI-CEB). This block enables the extraction of low-colourful areas from underwater images and performs colour correction without affecting contextual information. The primary characteristics of UICE-MIRNet are the extraction of multiple features using a convolutional stream, feature fusion to facilitate the flow of information, preservation of contextual information by discarding irrelevant features, and increased colourfulness through proper feature selection. Enhanced images are then used to train the YOLOv4 object detection model. The performance of the proposed UICE-MIRNet method is quantitatively evaluated using standard metrics such as UIQM, UCIQE, entropy, and PSNR. The proposed work is compared with many existing image enhancement and restoration techniques, and the performance of object detection is assessed using precision, recall, and mAP. Extensive experiments are conducted on two standard datasets, Brackish and Trash-ICRA19, to demonstrate the performance of the proposed work compared to existing methods.
The results show that the proposed model outperforms many state-of-the-art techniques.
- Published
- 2024
- Full Text
- View/download PDF
31. UICE-MIRNet guided image enhancement for underwater object detection.
- Author
-
Sarkar, Pratima, De, Sourav, Gurung, Sandeep, and Dey, Prasenjit
- Subjects
- *
OBJECT recognition (Computer vision) , *ARTIFICIAL neural networks , *IMAGE reconstruction , *COMPUTER vision , *IMAGE intensifiers - Abstract
Underwater object detection is a crucial aspect of monitoring aquaculture resources to preserve the marine ecosystem. In most cases, low-light and scattered lighting conditions create challenges for computer vision-based underwater object detection. To address these issues, low-colourfulness and low-light image enhancement techniques are explored. This work proposes an underwater image enhancement technique called Underwater Image Colorfulness Enhancement MIRNet (UICE-MIRNet) to increase the visibility of small, multiple, dense objects, followed by underwater object detection using YOLOv4. UICE-MIRNet is a specialized version of classical MIRNet that handles random increments of brightness features to address the visibility problem. The proposed UICE-MIRNet restricts brightness and also improves the colourfulness of underwater images. UICE-MIRNet consists of an Underwater Image-Colourfulness Enhancement Block (UI-CEB). This block enables the extraction of low-colourful areas from underwater images and performs colour correction without affecting contextual information. The primary characteristics of UICE-MIRNet are the extraction of multiple features using a convolutional stream, feature fusion to facilitate the flow of information, preservation of contextual information by discarding irrelevant features, and increased colourfulness through proper feature selection. Enhanced images are then used to train the YOLOv4 object detection model. The performance of the proposed UICE-MIRNet method is quantitatively evaluated using standard metrics such as UIQM, UCIQE, entropy, and PSNR. The proposed work is compared with many existing image enhancement and restoration techniques, and the performance of object detection is assessed using precision, recall, and mAP. Extensive experiments are conducted on two standard datasets, Brackish and Trash-ICRA19, to demonstrate the performance of the proposed work compared to existing methods.
The results show that the proposed model outperforms many state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
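PSNR, reported by this and several other entries in the listing, is simple enough to state exactly. A minimal implementation for float images:

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in decibels for float images in [0, peak]."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")    # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
noisy = np.full((8, 8), 0.1)    # uniform error of 0.1, so MSE = 0.01
score = psnr(ref, noisy)        # 10 * log10(1 / 0.01) = 20 dB
```

Note that PSNR and SSIM are full-reference metrics (they need a ground-truth image), whereas UIQM and UCIQE, also cited above, are no-reference underwater quality measures.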
32. FishDet-YOLO: Enhanced Underwater Fish Detection with Richer Gradient Flow and Long-Range Dependency Capture through Mamba-C2f.
- Author
-
Yang, Chen, Xiang, Jian, Li, Xiaoyong, and Xie, Yunjie
- Subjects
MARINE ecosystem health ,OBJECT recognition (Computer vision) ,FISH diversity ,SPECIES diversity ,FISH habitats ,MARINE biodiversity - Abstract
The fish detection task is an essential component of marine exploration; it helps scientists monitor fish population numbers and diversity and understand changes in fish behavior and habitat. It also plays a significant role in assessing the health of marine ecosystems, formulating conservation measures, and maintaining biodiversity. However, there are two main issues with current fish detection algorithms. First, lighting conditions underwater differ significantly from those on land; light scattering and absorption in water cause uneven illumination, color distortion, and reduced contrast in images, and these lighting variations can affect the accuracy of detection algorithms. Second, the wide variation of fish species in shape, color, and size poses challenges: some fish have complex textures or camouflage features that are difficult for current detection algorithms to differentiate. To address these issues, we propose a fish detection algorithm, FishDet-YOLO, built on improvements to the YOLOv8 algorithm. To tackle the complexities of underwater environments, we design an Underwater Enhancement Module network (UEM) that can be jointly trained with YOLO; the UEM enhances the details of underwater images via end-to-end training with YOLO. To address the diversity of fish species, we leverage the Mamba model's ability to capture long-range dependencies without increasing computational complexity and integrate it with the C2f module from YOLOv8 to create Mamba-C2f. Through this design, adaptability in handling complex fish detection tasks is improved. In addition, the RUOD and DUO public datasets are used to train and evaluate FishDet-YOLO. FishDet-YOLO achieves mAP scores of 89.5% and 88.8% on the test sets of RUOD and DUO, respectively, marking improvements of 8% and 8.2% over YOLOv8. It also surpasses recent state-of-the-art general object detection and underwater fish detection algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Effective adversarial transfer learning for underwater image enhancement with hybrid losses.
- Author
-
Yang, Hanwei, Peng, Weilong, Yao, Jiamin, and Ye, Xijun
- Abstract
Underwater images often suffer from degradation caused by light attenuation and turbidity, resulting in poor image quality. Due to the domain gap between the source and target domains and the lack of paired images in certain datasets, most learning-based methods exhibit limited performance in preserving image details and ensuring model stability, with poor generalization ability. To address these challenges, this paper proposes an effective adversarial transfer learning method for underwater image enhancement with a specially designed hybrid loss. Our approach employs domain adaptation techniques based on adversarial transfer learning to automatically learn image features and patterns from both underwater and in-air images. Specifically, we design domain-adaptive generators to establish forward and backward processes between the source and target domains. Additionally, we introduce hybrid losses for domain adaptation, facilitating effective enhancement of underwater images. The forward generator demonstrates promising generalization. Experimental results demonstrate the high feasibility and effectiveness of the proposed method in enhancing underwater images, offering a powerful solution for both underwater and in-air images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Self-organized underwater image enhancement.
- Author
-
Wang, Hao, Zhang, Weibo, and Ren, Peng
- Subjects
- *
IMAGE intensifiers , *COLOR vision , *REINFORCEMENT learning - Abstract
Underwater images captured in diverse underwater scenes exhibit varying types and degrees of degradation, including color deviations, low contrast, blurry details, etc. Single image enhancement methods tend to insufficiently address the diverse degradation issues, resulting in inappropriate results that do not align well with human visual perception or underwater color prior. To overcome these deficiencies, we develop a novel reinforcement learning framework that selects a sequence of image enhancement methods and configures their parameters in a self-organized manner for the purpose of underwater image enhancement. In contrast to end-to-end deep learning-based black-box mechanisms, the novel framework operates in a white-box fashion where the mechanisms for the method selection and parameter configuration are transparent. Furthermore, our framework incorporates the human visual perception and the underwater color prior into non-reference score increments for rewarding the underwater image enhancement. This breaks through the training limit imposed by volunteer-selected enhanced images as references. Comprehensive qualitative and quantitative experiments ultimately demonstrate that our framework outperforms nine state-of-the-art underwater image enhancement methods in terms of visual quality, and achieves better performance in five underwater image quality assessment metrics on three underwater image datasets. We release our code at https://gitee.com/wanghaoupc/Self_organized_UIE. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. CodeUNet: Autonomous underwater vehicle real visual enhancement via underwater codebook priors.
- Author
-
Wang, Linling, Xu, Xiaoyan, An, Shunmin, Han, Bing, and Guo, Yi
- Subjects
- *
AUTONOMOUS underwater vehicles , *IMAGE intensifiers , *PRIOR learning , *EVALUATION methodology , *GENERALIZATION - Abstract
Vision enhancement for autonomous underwater vehicles (AUVs) has received increasing attention and developed rapidly in recent years. However, existing methods based on prior knowledge struggle to adapt to all scenarios, while learning-based approaches lack paired datasets from real-world scenes, limiting their enhancement capabilities. This severely hampers their generalization and application in AUVs. Moreover, existing deep learning-based methods largely overlook the advantages of prior knowledge-based approaches. To address these issues, a novel architecture called CodeUNet is proposed in this paper. Instead of relying on physical scattering models, a real-world scene vision enhancement network based on a codebook prior is considered. First, a VQGAN is pretrained on underwater datasets to obtain a discrete codebook encapsulating the underwater priors (UPs). The decoder is equipped with a novel feature alignment module that effectively leverages underwater features to generate clean results. Then, the distance between the features and the matches is recalibrated by controllable matching operations, enabling better matching. Extensive experiments demonstrate that CodeUNet outperforms state-of-the-art methods in terms of visual quality and quantitative metrics. Test results for geometric rotation, SIFT salient point detection, and edge detection applications are shown in this paper, providing strong evidence for the feasibility of CodeUNet in the field of autonomous underwater vehicles. Specifically, on the full-reference dataset, the proposed method outperforms most of the 14 state-of-the-art methods on four evaluation metrics, with an improvement of up to 3.7722 compared to MLLE. On the no-reference dataset, the proposed method achieves excellent results, with an improvement of up to 0.0362 compared to MLLE. Links to the dataset and code for this project can be found at: https://github.com/An-Shunmin/CodeUNet.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. MSFE-UIENet: A Multi-Scale Feature Extraction Network for Marine Underwater Image Enhancement.
- Author
-
Zhao, Shengya, Mei, Xinkui, Ye, Xiufen, and Guo, Shuxiang
- Subjects
IMAGE intensifiers ,LIGHT absorption ,OPTICAL images ,LIGHT scattering ,PYRAMIDS ,IMAGE enhancement (Imaging systems) - Abstract
Underwater optical images have outstanding advantages for short-range underwater target detection tasks. However, owing to the limitations of the special underwater imaging environment, underwater images often suffer from problems such as noise interference, blurred texture, low contrast, and color distortion. Marine underwater image enhancement addresses the degraded underwater image quality caused by light absorption and scattering. This study introduces MSFE-UIENet, a high-performance network designed to improve image feature extraction for deep-learning-based underwater image enhancement, addressing the limitations of single-convolution and upsampling/downsampling techniques. The network employs an encoder–decoder architecture to enhance image quality in underwater settings. In response to the underwhelming enhancement performance caused by conventional networks' sole downsampling method, this study introduces a pyramid downsampling module that captures more intricate image features through multi-scale downsampling. Additionally, to augment the feature extraction capabilities of the network, an advanced feature extraction module is proposed to capture detailed information from underwater images. Furthermore, to optimize the network's gradient flow, forward and backward branches are introduced to accelerate its convergence rate and improve stability. Experimental validation using underwater image datasets indicates that the proposed network effectively enhances underwater image quality, preserving image details and suppressing noise across various underwater environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. An effectual underwater image enhancement framework using adaptive trans-resunet ++ with attention mechanism.
- Author
-
P, Ajanya and Meera, S.
- Subjects
- *
IMAGE intensifiers , *DEEP learning , *GORILLA (Genus) , *HAZE , *IMAGE enhancement (Imaging systems) , *ALGORITHMS - Abstract
The intricacy of the underwater setting makes it difficult for optical lenses to capture clear underwater photos free of haze and colour distortion. Some studies use domain adaptation and transfer learning to address this issue; they aim to reduce the latent mismatch between synthetic and real-world data, which makes the latent data space difficult to interpret and impractical to control. The background light is a crucial component of the degradation model that directly affects how well images are enhanced. Thus, to improve the quality of underwater images, new deep-learning techniques are designed in this paper. Here, an Adaptive Trans-ResUnet++ with Attention Mechanism-based model performs the real-time underwater image enhancement process. In addition, a novel Random Enhanced Artificial Gorilla Troops Optimizer algorithm is used to optimize the parameters of the model to further enhance its performance. Diverse quantitative and qualitative validations are also carried out to study the enhancement of underwater image quality. The enhanced underwater images may also be useful in underwater object detection. The enhanced images obtained from the developed model are compared with existing techniques to confirm the efficacy of the suggested underwater image enhancement process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Underwater image enhancement algorithm based on color correction and contrast enhancement.
- Author
-
Xue, Qianqian, Hu, Hongping, Bai, Yanping, Cheng, Rong, Wang, Peng, and Song, Na
- Subjects
- *
IMAGE intensifiers , *WATER waves , *ALGORITHMS , *WAVELET transforms , *COLOR - Abstract
Due to the complex underwater environment and the selective absorption and scattering effect of water on light waves, underwater images often suffer from issues such as low contrast, color distortion, and blurred details. This paper presents a stable and effective algorithm for enhancing underwater images to address these challenges. Firstly, an improved color correction algorithm based on the gray world and minimum information loss is employed to remove the blue-green bias present in the images. Secondly, a contrast enhancement algorithm based on the guided filter and wavelet decomposition is applied to make the texture details of the image clearer. Then, the normalized weight map of the image is obtained to carry out multi-scale fusion. Finally, multi-scale decomposition is applied to the fused image. The experimental results show that the algorithm proposed in this paper can correct the image color deviation, improve the image contrast and enhance the image details. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
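The gray-world color correction that entry 38 builds on can be sketched in a few lines. This is a minimal illustration of the baseline assumption only; the paper's improved variant additionally constrains the gains to minimise information loss, which is not shown here:

```python
import numpy as np

def gray_world_correction(img):
    """Classic gray-world white balance: scale each channel so its mean
    matches the mean of all channels, removing a global color cast."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean
    gains = channel_means.mean() / (channel_means + 1e-8)
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```

Applied to an image with a blue-green cast, the per-channel gains pull the three channel means together, which is exactly the bias-removal step the abstract describes before its contrast stage.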
39. Underwater Image Enhancement Fusion Method Guided by Salient Region Detection.
- Author
-
Yang, Jiawei, Huang, Hongwu, Lin, Fanchao, Gao, Xiujing, Jin, Junjie, and Zhang, Biwen
- Subjects
LIGHT absorption ,IMAGE intensifiers ,IMAGE fusion ,OPTICAL properties ,LIGHT scattering - Abstract
Exploring and monitoring underwater environments pose unique challenges due to water's complex optical properties, which significantly impact image quality. Challenges like light absorption and scattering result in color distortion and decreased visibility. Traditional underwater image acquisition methods face these obstacles, highlighting the need for advanced enhancement techniques that address the color shift and detail loss caused by the underwater environment. This study proposes a salient region-guided underwater image enhancement fusion method to alleviate these problems. First, this study proposes an advanced dark channel prior method to reduce haze effects in underwater images, significantly improving visibility and detail. Subsequently, a comprehensive RGB color correction restores the underwater scene's natural appearance. The innovation of our method is that it performs fusion through a combination of Laplacian and Gaussian pyramids, guided by salient region coefficients, thus preserving and accentuating the visually significant elements of the underwater environment. Comprehensive subjective and objective evaluations demonstrate our method's superior performance in enhancing contrast, color depth, and overall visual quality compared to existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
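The Laplacian/Gaussian pyramid fusion described in entry 39 can be illustrated with a pure-NumPy sketch: Laplacian pyramids of the inputs, Gaussian pyramids of the weight maps, blended level by level. A 3x3 box blur stands in for a proper Gaussian kernel, and the saliency-derived weights are assumed to be given, so this is an illustration of the mechanism rather than the paper's method:

```python
import numpy as np

def box_blur(img):
    """3x3 box blur (a stand-in for the Gaussian kernel of a real pyramid)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def down(img):
    return box_blur(img)[::2, ::2]

def up(img, shape):
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return box_blur(out[:shape[0], :shape[1]])

def pyramid_fuse(a, b, wa, wb, levels=3):
    """Fuse two single-channel inputs with per-pixel weight maps."""
    s = wa + wb + 1e-8                       # normalise weights per pixel
    wa, wb = wa / s, wb / s
    ga, gb, gwa, gwb = [a], [b], [wa], [wb]
    for _ in range(levels - 1):
        ga.append(down(ga[-1])); gb.append(down(gb[-1]))
        gwa.append(down(gwa[-1])); gwb.append(down(gwb[-1]))
    fused = ga[-1] * gwa[-1] + gb[-1] * gwb[-1]        # coarsest level
    for lvl in range(levels - 2, -1, -1):
        la = ga[lvl] - up(ga[lvl + 1], ga[lvl].shape)  # Laplacian bands
        lb = gb[lvl] - up(gb[lvl + 1], gb[lvl].shape)
        fused = up(fused, ga[lvl].shape) + la * gwa[lvl] + lb * gwb[lvl]
    return fused
```

Blending each Laplacian band with a smoothed weight map is what lets the fusion keep sharp detail from each input without visible seams.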
40. Pixel-Level Channel-Adaptive Underwater Image Enhancement Algorithm.
- Author
-
彭晏飞, 张添淇, and 安 彤
- Subjects
FEATURE extraction ,DEEP learning ,IMAGE intensifiers ,ALGORITHMS ,SEMANTICS ,PIXELS - Abstract
Copyright of Chinese Journal of Liquid Crystal & Displays is the property of Chinese Journal of Liquid Crystal & Displays and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
41. Underwater Image Enhancement Based on Color Correction and Multi-Scale Fusion.
- Author
-
陶 洋, 武 萍, 刘羽婷, 方文俊, and 周立群
- Subjects
PROBLEM solving ,IMAGE intensifiers ,ALGORITHMS ,COLOR - Abstract
- Published
- 2024
- Full Text
- View/download PDF
42. Redefining Accuracy: Underwater Depth Estimation for Irregular Illumination Scenes.
- Author
-
Liu, Tong, Zhang, Sainan, and Yu, Zhibin
- Subjects
- *
IMAGE intensifiers , *LIGHTING , *UNDERWATER navigation , *ENVIRONMENTAL monitoring , *SUBMERSIBLES , *MONOCULARS - Abstract
Acquiring underwater depth maps is essential as they provide indispensable three-dimensional spatial information for visualizing the underwater environment. These depth maps serve various purposes, including underwater navigation, environmental monitoring, and resource exploration. While most of the current depth estimation methods can work well in ideal underwater environments with homogeneous illumination, few consider the risk caused by irregular illumination, which is common in practical underwater environments. On the one hand, underwater environments with low-light conditions can reduce image contrast. The reduction brings challenges to depth estimation models in accurately differentiating among objects. On the other hand, overexposure caused by reflection or artificial illumination can degrade the textures of underwater objects, which is crucial to geometric constraints between frames. To address the above issues, we propose an underwater self-supervised monocular depth estimation network integrating image enhancement and auxiliary depth information. We use the Monte Carlo image enhancement module (MC-IEM) to tackle the inherent uncertainty in low-light underwater images through probabilistic estimation. When pixel values are enhanced, object recognition becomes more accessible, allowing for a more precise acquisition of distance information and thus resulting in more accurate depth estimation. Next, we extract additional geometric features through transfer learning, infusing prior knowledge from a supervised large-scale model into a self-supervised depth estimation network to refine loss functions and a depth network to address the overexposure issue. Experiments on two public datasets show superior performance compared to existing approaches in underwater depth estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Reconstructing the Colors of Underwater Images Based on the Color Mapping Strategy.
- Author
-
Wu, Siyuan, Sun, Bangyong, Yang, Xiao, Han, Wenjia, Tan, Jiahai, and Gao, Xiaomei
- Subjects
- *
IMAGE enhancement (Imaging systems) , *CONVOLUTIONAL neural networks , *ACHROMATISM , *LIGHT scattering , *IMAGE intensifiers , *VISIBLE spectra - Abstract
Underwater imagery plays a vital role in ocean development and conservation efforts. However, underwater images often suffer from chromatic aberration and low contrast due to the attenuation and scattering of visible light in the complex medium of water. To address these issues, we propose an underwater image enhancement network called CM-Net, which utilizes color mapping techniques to remove noise and restore the natural brightness and colors of underwater images. Specifically, CM-Net consists of a three-step solution: adaptive color mapping (ACM), local enhancement (LE), and global generation (GG). Inspired by the principles of color gamut mapping, the ACM enhances the network's adaptive response to regions with severe color attenuation. ACM enables the correction of the blue-green cast in underwater images by combining color constancy theory with the power of convolutional neural networks. To account for inconsistent attenuation in different channels and spatial regions, we designed a multi-head reinforcement module (MHR) in the LE step. The MHR enhances the network's attention to channels and spatial regions with more pronounced attenuation, further improving contrast and saturation. Compared to the best candidate models on the EUVP and UIEB datasets, CM-Net improves PSNR by 18.1% and 6.5% and SSIM by 5.9% and 13.3%, respectively. At the same time, CIEDE2000 decreased by 25.6% and 1.3%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Underwater Image Enhancement Based on Light Field-Guided Rendering Network.
- Author
-
Yeh, Chia-Hung, Lai, Yu-Wei, Lin, Yu-Yang, Chen, Mei-Juan, and Wang, Chua-Chin
- Subjects
CONVOLUTIONAL neural networks ,UNDERWATER exploration ,IMAGE intensifiers ,IMAGE reconstruction ,IMPERFECTION ,IMAGE enhancement (Imaging systems) - Abstract
Underwater images often encounter challenges such as attenuation, color distortion, and noise caused by artificial lighting sources. These imperfections not only degrade image quality but also impose constraints on related application tasks. Improving underwater image quality is crucial for underwater activities. However, obtaining clear underwater images has been a challenge, because scattering and blur hinder the rendering of true underwater colors, affecting the accuracy of underwater exploration. Therefore, this paper proposes a new deep network model for single underwater image enhancement. More specifically, our framework includes a light field module (LFM) and sketch module, aiming at the generation of a light field map of the target image for improving the color representation and preserving the details of the original image by providing contour information. The restored underwater image is gradually enhanced, guided by the light field map. The experimental results show the better image restoration effectiveness, both quantitatively and qualitatively, of the proposed method with a lower (or comparable) computing cost, compared with the state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. NUAM-Net: A Novel Underwater Image Enhancement Attention Mechanism Network.
- Author
-
Wen, Zhang, Zhao, Yikang, Gao, Feng, Su, Hao, Rao, Yuan, and Dong, Junyu
- Subjects
UNDERWATER exploration ,COLOR space ,COLOR vision ,IMAGE intensifiers ,ATTENUATION of light - Abstract
Vision-based underwater exploration is crucial for marine research. However, the degradation of underwater images due to light attenuation and scattering poses a significant challenge. This results in the poor visual quality of underwater images and impedes the development of vision-based underwater exploration systems. Recent popular learning-based Underwater Image Enhancement (UIE) methods address this challenge by training enhancement networks with annotated image pairs, where the label image is manually selected from the reference images of existing UIE methods since ground truth for underwater images does not exist. Nevertheless, these methods encounter uncertainty issues stemming from ambiguous multiple-candidate references. Moreover, they often suffer from local perception and color perception limitations, which hinder the effective mitigation of wide-range underwater degradation. This paper proposes a novel NUAM-Net (Novel Underwater Image Enhancement Attention Mechanism Network) that addresses these limitations. NUAM-Net leverages a probabilistic training framework, measuring enhancement uncertainty to learn the UIE mapping from a set of ambiguous reference images. By extracting features from both the RGB and LAB color spaces, our method fully exploits the fine-grained color degradation clues of underwater images. Additionally, we enhance underwater feature extraction by incorporating a novel Adaptive Underwater Image Enhancement Module (AUEM) that incorporates both local and long-range receptive fields. Experimental results on the well-known UIEBD benchmark demonstrate that our method significantly outperforms popular UIE methods in terms of PSNR while maintaining a favorable Mean Opinion Score. The ablation study also validates the effectiveness of our proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. GFRENet: An Efficient Network for Underwater Image Enhancement with Gated Linear Units and Fast Fourier Convolution.
- Author
-
Zhang, Bingxian, Fang, Jiahao, Li, Yujie, Wang, Yue, Zhou, Qinglong, and Wang, Xing
- Subjects
IMAGE intensifiers ,DEEP learning ,LIGHT absorption ,IMAGE enhancement (Imaging systems) - Abstract
Underwater image enhancement is critical for a variety of marine applications such as exploration, navigation, and biological research. However, underwater images often suffer from quality degradation due to factors such as light absorption, scattering, and color distortion. Although current deep learning methods have achieved better performance, it is difficult to balance the enhancement performance and computational efficiency in practical applications, and some methods tend to cause performance degradation on high-resolution large-size input images. To address these issues, this paper proposes an efficient network GFRENet for underwater image enhancement utilizing gated linear units (GLUs) and fast Fourier convolution (FFC). GLUs help to selectively retain the most relevant features, thus improving the overall enhancement performance. FFC enables efficient and robust frequency domain processing to effectively address the unique challenges posed by the underwater environment. Extensive experiments on benchmark datasets show that our approach significantly outperforms existing state-of-the-art techniques in both qualitative and quantitative metrics. The proposed network provides a promising solution for real-time underwater image enhancement, making it suitable for practical deployment in various underwater applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
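The two building blocks named in entry 46 admit compact illustrations: a gated linear unit, and an FFC-style global branch that filters in the frequency domain so every output position sees the whole image. Both are generic sketches of the underlying operations, not GFRENet's actual layers (which are learned and convolutional):

```python
import numpy as np

def glu(x, axis=-1):
    """Gated linear unit: split features in half, gate one half
    by the sigmoid of the other."""
    a, b = np.split(x, 2, axis=axis)
    return a * (1.0 / (1.0 + np.exp(-b)))

def fft_global_branch(feat, weight):
    """FFC-style global branch: pointwise multiply in the frequency domain,
    giving every output pixel an image-wide receptive field.
    `weight` has the rfft2 spectrum shape (H, W // 2 + 1)."""
    spec = np.fft.rfft2(feat)
    return np.fft.irfft2(spec * weight, s=feat.shape)
```

With an all-ones spectral weight the global branch is the identity, which makes the frequency-domain round trip easy to sanity-check.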
47. MEvo-GAN: A Multi-Scale Evolutionary Generative Adversarial Network for Underwater Image Enhancement.
- Author
-
Fu, Feiran, Liu, Peng, Shao, Zhen, Xu, Jing, and Fang, Ming
- Subjects
GENERATIVE adversarial networks ,GENETIC algorithms ,IMAGE intensifiers ,LIGHT absorption ,SIGNAL-to-noise ratio - Abstract
In underwater imaging, achieving high-quality imagery is essential but challenging due to factors such as wavelength-dependent absorption and complex lighting dynamics. This paper introduces MEvo-GAN, a novel methodology designed to address these challenges by combining generative adversarial networks with genetic algorithms. The key innovation lies in the integration of genetic algorithm principles with multi-scale generator and discriminator structures in Generative Adversarial Networks (GANs). This approach enhances image details and structural integrity while significantly improving training stability. This combination enables more effective exploration and optimization of the solution space, leading to reduced oscillation, mitigated mode collapse, and smoother convergence to high-quality generative outcomes. Quantitative and qualitative analyses of various public datasets confirm the effectiveness of MEvo-GAN in improving the clarity, color fidelity, and detail accuracy of underwater images. The results of the experiments on the UIEB dataset are remarkable, with MEvo-GAN attaining a Peak Signal-to-Noise Ratio (PSNR) of 21.2758, Structural Similarity Index (SSIM) of 0.8662, and Underwater Color Image Quality Evaluation (UCIQE) of 0.6597. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
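The PSNR figure entry 47 reports (21.2758 dB on UIEB) follows the standard definition, which several entries in this list use for comparison and which can be computed as:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```

Note that reference-based scores like PSNR and SSIM require a ground-truth image, which is why reference-free metrics such as UCIQE appear alongside them in these abstracts.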
48. Deep Dynamic Weights for Underwater Image Restoration.
- Author
-
Awan, Hafiz Shakeel Ahmad and Mahmood, Muhammad Tariq
- Subjects
CONVOLUTIONAL neural networks ,STANDARD deviations ,IMAGE intensifiers ,ATTENUATION of light ,DEEP learning - Abstract
Underwater imaging presents unique challenges, notably color distortions and reduced contrast due to light attenuation and scattering. Most underwater image enhancement methods first use linear transformations for color compensation and then enhance the image. We observed that linear transformation for color compensation is not suitable for certain images. For such images, non-linear mapping is a better choice. This paper introduces a unique underwater image restoration approach leveraging a streamlined convolutional neural network (CNN) for dynamic weight learning for linear and non-linear mapping. In the first phase, a classifier is applied that classifies the input images as Type I or Type II. In the second phase, we use the Deep Line Model (DLM) for Type-I images and the Deep Curve Model (DCM) for Type-II images. For mapping an input image to an output image, the DLM creatively combines color compensation and contrast adjustment in a single step and uses deep lines for transformation, whereas the DCM employs higher-order curves. Both models utilize lightweight neural networks that learn per-pixel dynamic weights based on the input image's characteristics. Comprehensive evaluations on benchmark datasets using metrics like peak signal-to-noise ratio (PSNR) and root mean square error (RMSE) affirm our method's effectiveness in accurately restoring underwater images, outperforming existing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
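Entry 48's split between a line model and a curve model can be mimicked with simple per-pixel mappings. The parameterisations below are assumptions for illustration: the line combines gain and offset in one step as the abstract describes, while the curve uses a Zero-DCE-style iterated quadratic, since the paper's exact curve form is not given in the abstract:

```python
import numpy as np

def deep_line(x, a, b):
    """Line-style mapping y = a*x + b on [0, 1] images; in the paper, a and b
    would be per-pixel weights predicted by the lightweight CNN."""
    return np.clip(a * x + b, 0.0, 1.0)

def deep_curve(x, alpha, iters=4):
    """Higher-order curve built by iterating a quadratic adjustment
    (a common curve parameterisation; the paper's exact form may differ)."""
    y = np.clip(x, 0.0, 1.0)
    for _ in range(iters):
        y = y + alpha * y * (1.0 - y)
    return np.clip(y, 0.0, 1.0)
```

The quadratic term vanishes at 0 and 1, so the iterated curve brightens mid-tones without pushing pixels out of range, which is why curve models suit images where a single linear gain would clip.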
49. An Underwater Image Enhancement Method Based on Diffusion Model Using Dual-Layer Attention Mechanism.
- Author
-
Zhang, Hong, He, Ran, and Fang, Wei
- Subjects
IMAGE intensifiers ,INTERPOLATION - Abstract
Diffusion models have been increasingly utilized in various image-processing tasks, such as segmentation, denoising, and enhancement. These models also show exceptional performance in enhancing underwater images. However, conventional models for underwater image enhancement often face the challenge of simultaneously improving color restoration and super-resolution. This paper introduces a dual-layer attention mechanism that integrates spatial and channel attention to enhance color restoration, while preserving critical image features. Additionally, specific scale factors and interpolation methods are employed during the upsampling process to increase resolution. The proposed DL-UW method achieves significant enhancements in color, illumination, and resolution for low-quality underwater images, resulting in high PSNR, SSIM, and UIQM values. The model demonstrates a robust performance on different datasets, confirming its broad applicability and effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
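The dual-layer (channel plus spatial) attention of entry 49 can be sketched with parameter-free pooling-based gates. The actual model learns its gates inside a diffusion network, so this only illustrates the two-stage structure, not the paper's layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dual_attention(feat):
    """Channel attention followed by spatial attention on an (H, W, C) map.
    Gates here are pooling-based for illustration; in the paper they are learned."""
    # channel attention: gate each channel by its global average response
    ch_gate = sigmoid(feat.mean(axis=(0, 1), keepdims=True))
    feat = feat * ch_gate
    # spatial attention: gate each pixel by its cross-channel mean response
    sp_gate = sigmoid(feat.mean(axis=2, keepdims=True))
    return feat * sp_gate
```

Chaining the two gates lets the channel stage reweight color-carrying feature maps while the spatial stage emphasises regions, matching the color-restoration-plus-detail goal the abstract describes.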
50. Attention-based color consistency underwater image enhancement network.
- Author
-
Chang, Baocai, Li, Jinjiang, Wang, Haiyang, and Li, Mengjun
- Abstract
Underwater images often exhibit color deviation, reduced contrast, distortion, and other issues due to light refraction, scattering, and absorption. Therefore, restoring detailed information in underwater images and obtaining high-quality results are primary objectives in underwater image enhancement tasks. Recently, deep learning-based methods have shown promising results, but handling details in low-light underwater image processing remains challenging. In this paper, we propose an attention-based color consistency underwater image enhancement network. The method consists of three components: illumination detail network, balance stretch module, and prediction learning module. The illumination detail network is responsible for generating the texture structure and detail information of the image. We introduce a novel color restoration module to better match color and content feature information, maintaining color consistency. The balance stretch module compensates using pixel mean and maximum values, adaptively adjusting color distribution. Finally, the prediction learning module facilitates context feature interaction to obtain a reliable and effective underwater enhancement model. Experiments conducted on three real underwater datasets demonstrate that our approach produces more natural enhanced images, performing well compared to state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF