377 results for "Underwater vision"
Search Results
2. An appraisal of backscatter removal and refraction calibration models for improving the performance of vision-based mapping and navigation in shallow underwater environments
- Author
- Muhammad, Fickrie, Poerbandono, Sternberg, Harald, Djunarsjah, Eka, and Abidin, Hasanuddin Z
- Published
- 2025
- Full Text
- View/download PDF
3. Artificial Intelligence for Automated Marine Growth Segmentation
- Author
- Carvalho, João, Leite, Pedro Nuno, Mina, João, Pinho, Lourenço, Gonçalves, Eduardo P., and Pinto, Andry Maykol; edited by Marques, Lino, Santos, Cristina, Lima, José Luís, Tardioli, Danilo, and Ferre, Manuel
- Published
- 2024
- Full Text
- View/download PDF
4. An AUV Tracking Algorithm Based on the Scale-Adaptive Kernel Correlation Filter
- Author
- Lu, Sheng, Han, Bo, and Xu, Hongli; edited by Qu, Yi, Gu, Mancang, Niu, Yifeng, and Fu, Wenxing
- Published
- 2024
- Full Text
- View/download PDF
5. Review of Visual Control Technology for Undersea Vehicles
- Author
- Gao, Jian, He, Yaozhen, Chen, Yimin, Zhang, Yuanxu, Yang, Xubo, Li, Yufeng, and Zhang, Zhenchi
- Subjects
- undersea vehicle, underwater vision, visual control, Naval architecture. Shipbuilding. Marine engineering, VM1-989
- Abstract
Visual control is a control method that uses visual information for environmental and self-state awareness. This paper applies the technology to the control of undersea vehicles and analyzes the relevant research progress, challenges, and trends in different application scenarios. The current development and task scenarios of visual control technology for undersea vehicles are first introduced, focusing mainly on underwater image enhancement, target recognition, and pose estimation. The state of the art is then summarized and analyzed for three task scenarios: underwater visual dynamic positioning and target tracking, undersea vehicle docking, and underwater operational tasks such as target grasping. Finally, the challenges and development trends of visual control technology for undersea vehicles are outlined.
- Published
- 2024
- Full Text
- View/download PDF
6. Guidance technology for underwater robots based on visual servoing (基于视觉伺服的水下机器人导引技术)
- Author
- Kou Yejun (寇邺郡) and Li Xiang (李想)
- Published
- 2024
- Full Text
- View/download PDF
7. Underwater Object Detection in Marine Ranching Based on Improved YOLOv8.
- Author
- Jia, Rong, Lv, Bin, Chen, Jie, Liu, Hailin, Cao, Lin, and Liu, Min
- Subjects
- Object recognition (computer vision), Ranching, Feature extraction, Data augmentation, Mariculture
- Abstract
The aquaculture of marine ranching is of great significance for scientific aquaculture and for statistically grasping existing information on the types and density of living marine resources. However, underwater environments are complex, and marine organisms present many small and overlapping targets, which seriously degrades detector performance. To overcome these issues, we improve the YOLOv8 detector. The InceptionNeXt block is used in the backbone to enhance the feature extraction capabilities of the network. A separate and enhanced attention module (SEAM) is then added to the neck to improve the detection of overlapping targets. Moreover, the normalized Wasserstein distance (NWD) loss is proportionally added to the original CIoU loss to improve the detection of small targets. Data augmentation is applied during training to enhance the robustness of the network. Experimental results show that the improved YOLOv8 achieves an mAP of 84.5%, an improvement of approximately 6.2% over the original YOLOv8, with no significant increase in the number of parameters or computations. The detector can be deployed on seafloor observation platforms in marine ranching for real-time detection of marine organisms.
- Published
- 2024
- Full Text
- View/download PDF
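The small-target fix described in the abstract above, proportionally mixing a normalized Wasserstein distance (NWD) term into the IoU-based box regression loss, can be sketched as follows. The mixing ratio and the normalizing constant `c` are hypothetical placeholders (the paper's values are not given here), and CIoU is simplified to plain IoU for brevity:

```python
import math

def iou_xywh(b1, b2):
    # IoU of two boxes given as (cx, cy, w, h); convert to corner form first
    x11, y11 = b1[0] - b1[2] / 2, b1[1] - b1[3] / 2
    x12, y12 = b1[0] + b1[2] / 2, b1[1] + b1[3] / 2
    x21, y21 = b2[0] - b2[2] / 2, b2[1] - b2[3] / 2
    x22, y22 = b2[0] + b2[2] / 2, b2[1] + b2[3] / 2
    iw = max(0.0, min(x12, x22) - max(x11, x21))
    ih = max(0.0, min(y12, y22) - max(y11, y21))
    inter = iw * ih
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union if union > 0 else 0.0

def nwd(b1, b2, c=12.8):
    # Normalized Wasserstein distance: model each box as a 2-D Gaussian
    # N([cx, cy], diag(w^2/4, h^2/4)); the 2-Wasserstein distance between
    # two such Gaussians has a closed form. c is a dataset-dependent scale.
    w2_sq = ((b1[0] - b2[0]) ** 2 + (b1[1] - b2[1]) ** 2
             + ((b1[2] - b2[2]) / 2) ** 2 + ((b1[3] - b2[3]) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)

def combined_loss(pred, gt, ratio=0.5, c=12.8):
    # Proportional mix of an IoU-style loss and the NWD loss, as described
    # in the abstract (ratio is an assumed hyperparameter).
    return (1 - ratio) * (1 - iou_xywh(pred, gt)) + ratio * (1 - nwd(pred, gt, c))
```

The NWD term stays informative even when small boxes barely overlap, which is exactly where IoU-based losses go flat.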
8. Design, Implementation, and Evaluation of an External Pose-Tracking System for Underwater Cameras
- Author
- Winkel, Birger, Nakath, David, Woelk, Felix, and Köser, Kevin
- Published
- 2024
- Full Text
- View/download PDF
9. Underwater Acoustic Image Processing for Detection of Marine Debris
- Author
- Naik, Vritika Vijaylal and Ansari, Sadaf; edited by Sanyal, Goutam, Travieso-González, Carlos M., Awasthi, Shashank, Pinto, Carla M. A., and Purushothama, B. R.
- Published
- 2022
- Full Text
- View/download PDF
10. Underwater vision enhancement based on GAN with dehazing evaluation.
- Author
- Yu, Haifeng, Li, Xinbin, Feng, Yankai, and Han, Song
- Subjects
- Generative adversarial networks, Cognitive processing speed, Light scattering, Vision, Light absorption
- Abstract
Underwater vision suffers from visual degradation and haze caused by the absorption and scattering of light, so underwater vision enhancement must consider both dehazing quality and processing speed in a changing underwater environment. In this paper, a generative adversarial network with dehazing evaluation (GAN-DE) is proposed for underwater vision enhancement. Comprehensive training datasets, comprising both synthetic and real underwater images, are used to train the networks. The generator is designed as a combination of U-Net and the Variety of View network (VoVNet). In particular, a color-line loss function is designed based on the color-line model, which describes the degree of haze. The discriminator combines the color-line loss and the adversarial loss so as to remove haze while preserving the underwater image content. Qualitative, quantitative, and color-accuracy analyses of the experimental results show the superiority of GAN-DE over current state-of-the-art methods. More importantly, a video enhancement experiment conducted on the seabed obtained satisfactory results.
- Published
- 2023
- Full Text
- View/download PDF
11. Underwater Object Detection in Marine Ranching Based on Improved YOLOv8
- Author
- Jia, Rong, Lv, Bin, Chen, Jie, Liu, Hailin, Cao, Lin, and Liu, Min
- Subjects
- underwater vision, seafloor observation, object detection, deep learning, YOLO, Naval architecture. Shipbuilding. Marine engineering, VM1-989, Oceanography, GC1-1581
- Abstract
The aquaculture of marine ranching is of great significance for scientific aquaculture and for statistically grasping existing information on the types and density of living marine resources. However, underwater environments are complex, and marine organisms present many small and overlapping targets, which seriously degrades detector performance. To overcome these issues, we improve the YOLOv8 detector. The InceptionNeXt block is used in the backbone to enhance the feature extraction capabilities of the network. A separate and enhanced attention module (SEAM) is then added to the neck to improve the detection of overlapping targets. Moreover, the normalized Wasserstein distance (NWD) loss is proportionally added to the original CIoU loss to improve the detection of small targets. Data augmentation is applied during training to enhance the robustness of the network. Experimental results show that the improved YOLOv8 achieves an mAP of 84.5%, an improvement of approximately 6.2% over the original YOLOv8, with no significant increase in the number of parameters or computations. The detector can be deployed on seafloor observation platforms in marine ranching for real-time detection of marine organisms.
- Published
- 2023
- Full Text
- View/download PDF
12. Wavelength-based Attributed Deep Neural Network for Underwater Image Restoration.
- Author
- Sharma, Prasen, Bisht, Ira, and Sur, Arijit
- Subjects
- Image reconstruction, Attenuation of light, Deep learning, Image segmentation, Cost functions
- Abstract
Background: Underwater images generally suffer from low contrast and strong color distortion due to the non-uniform attenuation of light as it propagates through water. In addition, the degree of attenuation varies with wavelength, resulting in asymmetric traversal of the color channels. Despite the prolific work on underwater image restoration (UIR) using deep learning, this asymmetry has not been addressed in the corresponding network engineering. Contributions: As its first novelty, this article shows that attributing the right receptive field size (context) to each color channel, based on its traversal range, can lead to a substantial performance gain for UIR. It is also important to suppress irrelevant multi-contextual features and increase the representational power of the model; therefore, as a second novelty, an attentive skip mechanism is incorporated to adaptively refine the learned multi-contextual features. The proposed framework, called Deep WaveNet, is optimized using traditional pixel-wise and feature-based cost functions. An extensive set of experiments shows the efficacy of the proposed scheme over the best published literature on benchmark datasets. More importantly, the enhanced images are comprehensively validated across various high-level vision tasks, e.g., underwater image semantic segmentation and diver 2D pose estimation.
- Published
- 2023
- Full Text
- View/download PDF
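The abstract's first idea, giving each color channel a receptive field sized to how far that wavelength travels in water, can be illustrated with a toy scanline filter. The window sizes below are hypothetical placeholders, not the paper's; the point is only that the red channel, which attenuates fastest, gets the largest context:

```python
def smooth(row, k):
    # Symmetric moving average of odd width k, clamped at the edges
    r = k // 2
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - r), min(len(row), i + r + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

# Hypothetical per-channel context sizes: red light attenuates fastest in
# water, so the red channel is given the largest receptive field.
CONTEXT = {"R": 7, "G": 5, "B": 3}

def channelwise_context(pixels):
    # pixels: dict mapping channel name -> one scanline of intensities
    return {ch: smooth(vals, CONTEXT[ch]) for ch, vals in pixels.items()}
```

In the real network this per-channel sizing is done with convolutional kernels of different receptive fields, not box filters.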
13. MedUCC: Medium-Driven Underwater Camera Calibration for Refractive 3-D Reconstruction.
- Author
- Gu, Changjun, Cong, Yang, Sun, Gan, Gao, Yajun, Tang, Xu, Zhang, Tao, and Fan, Baojie
- Subjects
- Underwater cameras, Camera calibration, Calibration, Robot vision
- Abstract
Underwater camera calibration has attracted much attention due to its significance for high-precision three-dimensional (3-D) pose estimation and scene reconstruction. However, most existing methods calibrate the underwater camera in a single scenario (e.g., air-glass-water), which cannot properly formulate the geometry constraints and further results in a complex calibration process. Moreover, the calibration precision of these methods is low, since multilayer transparent refractions with unknown layer orientation and distance make the task more difficult than calibration in air. To address these challenges, we develop a novel and efficient medium-driven method for underwater camera calibration (MedUCC), which accurately calibrates the underwater camera parameters, including the orientation and position of the transparent glass. The key idea is to leverage the light-path changes caused by refraction between different media to acquire calibration data, which better formulates the geometry constraints and provides an initial estimate of the underwater camera parameters. To improve calibration accuracy, a quaternion-based solution is developed to refine the underwater camera parameters. Finally, we evaluate the calibration performance on an underwater camera system. Extensive experimental results demonstrate that the proposed method outperforms existing works. We also validate MedUCC on our 3-D scanner prototype, which illustrates the superiority of the proposed calibration method.
- Published
- 2022
- Full Text
- View/download PDF
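The refraction that MedUCC models can be reproduced with Snell's law in vector form. The sketch below is a 2-D, unit-vector toy version tracing a ray through a flat air-glass-water port with standard refractive indices; it is not the paper's calibration procedure:

```python
import math

# Standard refractive indices (assumed): air 1.0, glass 1.5, water 1.33
N_AIR, N_GLASS, N_WATER = 1.0, 1.5, 1.33

def refract(d, n, n1, n2):
    # Refract unit direction d at a planar interface with unit normal n
    # (n points against the incoming ray); vector form of Snell's law.
    cos_i = -(d[0] * n[0] + d[1] * n[1])
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0:
        return None  # total internal reflection
    s = r * cos_i - math.sqrt(k)
    return (r * d[0] + s * n[0], r * d[1] + s * n[1])

def through_flat_port(d):
    # Chain air -> glass -> water for a flat port with normal (0, 1)
    n = (0.0, 1.0)
    d1 = refract(d, n, N_AIR, N_GLASS)
    return refract(d1, n, N_GLASS, N_WATER)
```

For a flat parallel port the glass thickness only offsets the ray; the net direction change obeys Snell's law between air and water directly, which is why such ports still bend every off-axis ray and must be calibrated.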
14. Marine bubble flow quantification using wide-baseline stereo photogrammetry.
- Author
- She, Mengkun, Weiß, Tim, Song, Yifan, Urban, Peter, Greinert, Jens, and Köser, Kevin
- Subjects
- Stereoscopic cameras, Natural gas, Photogrammetry, Echo sounders, Water depth, Echo sounding, Atmosphere, Dissolved air flotation (water purification)
- Abstract
Reliable quantification of natural and anthropogenic gas release (e.g., CO2, methane) from the seafloor into the water column, and potentially to the atmosphere, is a challenging task. While ship-based echo sounders such as single-beam and multibeam systems allow detection of free gas (bubbles) in the water even from a great distance, exact quantification from the hydroacoustic data requires additional parameters such as rise speed and bubble size distribution. Optical methods are complementary in that they provide high temporal and spatial resolution of single bubbles or bubble streams at close range. In this contribution, we introduce a complete instrument and evaluation method for optical bubble stream characterization, targeted at flows of up to 100 mL/min and bubbles of a few millimeters in radius. The dedicated instrument employs a high-speed, deep-sea-capable stereo camera system that can record terabytes of bubble imagery when deployed at a seep site for later automated analysis. Bubble characteristics can be obtained for short sequences before relocating the instrument, or in autonomous mode at definable intervals of up to several days, in order to capture bubble flow variations due to, e.g., tide-dependent pressure changes or reservoir depletion. Besides reporting the steps needed to make bubble characterization robust and autonomous, we carefully evaluate the achievable accuracy, which lies in the range of 1-2% of the bubble radius, and propose a novel auto-calibration procedure that, owing to the lack of point correspondences, uses only the silhouettes of bubbles. The system has been operated successfully at 1000 m water depth at the Cascadia margin offshore Oregon to assess methane fluxes from various seep locations. Besides sample results, we also report failure cases and lessons learnt during deployment and method development.
- Published
- 2022
- Full Text
- View/download PDF
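The quantification target in the abstract, converting observed bubble radii into a volumetric flow, reduces to simple geometry once the radii are measured. A minimal sketch under a spherical-bubble assumption (real seep bubbles are often ellipsoidal, and the paper's pipeline is far more involved):

```python
import math

def bubble_volume_ml(radius_mm):
    # Volume of a spherical bubble in millilitres (1 mL = 1 cm^3 = 1000 mm^3)
    return (4.0 / 3.0) * math.pi * radius_mm ** 3 / 1000.0

def flow_ml_per_min(radii_mm, duration_s):
    # Total gas flow from all bubbles observed in a time window
    total_ml = sum(bubble_volume_ml(r) for r in radii_mm)
    return total_ml * 60.0 / duration_s
```

Note that because V is proportional to r^3, the stated 1-2% radius accuracy translates into roughly a 3-6% relative volume error per bubble (dV/V ≈ 3 dr/r).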
15. DeepRecog: Threefold underwater image deblurring and object recognition framework for AUV vision systems.
- Author
- Pranav, M. V., Shreyas Madhav, A. V., and Meena, Janaki
- Subjects
- Convolutional neural networks, Underwater exploration, Autonomous underwater vehicles, Submersibles, Object recognition (computer vision), Marine biology, Vision, Adaptive filters
- Abstract
Underwater explorations and probes have become frequent for marine discovery and the protection of endangered resources. The decrease in natural light with increasing water depth, and the medium's absorption and scattering of light, pose crucial challenges to underwater vision systems. Autonomous Underwater Vehicles (AUVs) depend on their imaging systems for navigation and environmental resource exploration. This paper proposes DeepRecog, an integrated underwater image deblurring and object recognition framework for AUV vision systems. The image deblurring module follows a threefold approach consisting of CNNs and adaptive and transformative filters. The ensemble object detection and recognition module identifies marine life and other common underwater assets in AUV images, achieving a mean Average Precision (mAP) of 0.95: 6.42% more precise than YOLOv3, 8.43% more than Faster R-CNN + VGG16, and 15.78% more than Faster R-CNN. The framework was created to provide real-time detection and recognition with minimal delay. The system can also be applied to previously acquired AUV imagery and aims to facilitate efficient marine image post-processing.
- Published
- 2022
- Full Text
- View/download PDF
16. Development of Robust Recognition Algorithm of Retro-reflective Marker Based on Visual Odometry for Underwater Environment
- Author
- Youn, Pillip, Jung, Kwangyik, and Myung, Hyun; edited by Kim, Jong-Hwan, Myung, Hyun, Kim, Junmo, Xu, Weiliang, Matson, Eric T, Jung, Jin-Woo, and Choi, Han-Lim
- Published
- 2019
- Full Text
- View/download PDF
17. The Synthesis of Unpaired Underwater Images for Monocular Underwater Depth Prediction
- Author
- Zhao, Qi, Zheng, Ziqiang, Zeng, Huimin, Yu, Zhibin, Zheng, Haiyong, and Zheng, Bing
- Subjects
- underwater vision, underwater depth map estimation, underwater image translation, generative adversarial network, image-to-image translation, Science, General. Including nature conservation, geographical distribution, QH1-199.5
- Abstract
Underwater depth prediction plays an important role in underwater vision research. Because of the complex underwater environment, it is extremely difficult and expensive to obtain underwater datasets with reliable depth annotation, so data-driven underwater depth map estimation remains a challenging task. To tackle this problem, we propose an end-to-end system with two modules, for underwater image synthesis and underwater depth map estimation respectively. The former translates hazy in-air RGB-D images into multi-style, realistic synthetic underwater images while retaining the objects and structural information of the inputs. We then construct a semi-real RGB-D underwater dataset from the synthesized underwater images and the original corresponding depth maps, and perform supervised depth estimation on these pseudo-paired underwater RGB-D images. Comprehensive experiments demonstrate that the proposed method generates multiple realistic underwater images with high fidelity, which can be used to improve monocular underwater depth estimation; furthermore, the trained depth estimation model can be applied to real underwater images. We will release our code and experimental settings at https://github.com/ZHAOQIII/UW_depth.
- Published
- 2021
- Full Text
- View/download PDF
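The synthesis module's job, turning an in-air RGB-D image into a plausible underwater one, is commonly grounded in the classic attenuation-plus-veiling-light image formation model. The sketch below uses that model with made-up per-channel coefficients; the paper's GAN-based translator is far more sophisticated:

```python
import numpy as np

# Hypothetical per-channel attenuation coefficients (red is absorbed fastest)
BETA = np.array([0.45, 0.12, 0.08])   # R, G, B, per metre
WATER = np.array([0.05, 0.35, 0.45])  # assumed veiling-light (background) colour

def synthesize_underwater(rgb, depth):
    # rgb: HxWx3 float image in [0, 1]; depth: HxW range map in metres.
    # Classic underwater image-formation model:
    #   I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))
    t = np.exp(-depth[..., None] * BETA)  # per-channel transmission
    return rgb * t + WATER * (1.0 - t)
```

At zero depth the scene is untouched; at large depth every pixel converges to the veiling-light colour, which is exactly the green-blue haze the depth network later learns to invert.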
18. A Survey of Underwater Human-Robot Interaction (U-HRI)
- Author
- Birk, Andreas
- Published
- 2022
- Full Text
- View/download PDF
19. Robust Underwater Fish Classification Based on Data Augmentation by Adding Noises in Random Local Regions
- Author
- Wei, Guanqun, Wei, Zhiqiang, Huang, Lei, Nie, Jie, and Chang, Huanhuan; edited by Hong, Richang, Cheng, Wen-Huang, Yamasaki, Toshihiko, Wang, Meng, and Ngo, Chong-Wah
- Published
- 2018
- Full Text
- View/download PDF
20. Three Birds, One Stone: Unified Laser-Based 3-D Reconstruction Across Different Media.
- Author
- Gu, Changjun, Cong, Yang, and Sun, Gan
- Subjects
- Optical scanners, Pose estimation (computer vision), Camera calibration, Underwater cameras, Detectors, Image reconstruction, Scanning systems
- Abstract
Laser-based 3-D reconstruction is crucial for autonomous manipulation and resource exploration in both air and underwater scenarios, owing to its accuracy and robustness to disturbances. However, most current laser-based 3-D reconstruction sensors cannot be applied across different media without recalibration, e.g., air, air-glass-air, and air-glass-water; achieving this is what we call killing three birds with one stone. The reason is that variation of the medium density changes the sensor calibration parameters and leaves the sensor with a systematic geometric bias. To address these challenges, a unified laser-based 3-D reconstruction method is proposed so that the laser-based scanner can be used across different media without recalibration. We first explicitly model the refraction of the underwater vision system and transform measurements across different media into a unified sensor reference frame. More specifically, an underwater refractive camera calibration model is designed to calibrate the orientation and position of the refractive interface, which improves the accuracy of underwater reconstruction for the laser-based scanner; we then present a refractive pose estimation model with a unified sensor reference frame, which allows the sensor to be applied across different scenarios directly. In the experiments, we validate the performance of our method on our underwater 3-D scanner prototype. Reconstruction results on several objects and scenarios demonstrate the effectiveness of the proposed method and the practicality of the designed sensor.
- Published
- 2021
- Full Text
- View/download PDF
21. Information Communication Technology (ICT) Tools for Preservation of Underwater Environment: A Vision-Based Posidonia Oceanica Monitoring.
- Author
- Ruscio, Francesco, Peralta, Giovanni, Pollini, Lorenzo, and Costanzi, Riccardo
- Subjects
- Posidonia oceanica, Information & communication technologies, Visual odometry, Scuba divers, Acoustic localization
- Abstract
Underwater monitoring activities are crucial for the preservation of marine ecosystems. Currently, scuba divers carry out data collection campaigns that are repetitive, dangerous, and expensive. This article describes the application of Information Communication Technology (ICT) tools to underwater visual data for monitoring purposes. The data come from a Posidonia oceanica survey mission carried out by a scuba diver using a Smart Dive Scooter equipped with visual acquisition and acoustic localization systems. An acoustic-based strategy for geo-referencing the optical dataset is reported: it exploits the synchronization between the audio track extracted from a camera and the transponder pings used for acoustic positioning. The positioning measurements are fed to an extended Kalman filter to estimate the diver's path during the mission, and a visual odometry algorithm implemented within the filter refines the navigation state estimate beyond what acoustic positioning alone provides. Moreover, a smoothing step based on the Rauch-Tung-Striebel smoother further improves the estimated diver positions. Finally, the article reports two data-processing results for monitoring applications: an image mosaic obtained by concatenating subsequent frames, and a qualitative distribution of Posidonia oceanica over the mission area obtained through image segmentation. Both outcomes are plotted over a satellite image of the surveyed area, showing that the proposed process is an effective tool for facilitating divers' monitoring and inspection activities.
- Published
- 2021
- Full Text
- View/download PDF
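The navigation chain in the abstract above (sparse acoustic position fixes fused in an extended Kalman filter) can be miniaturized to a linear 1-D constant-velocity filter. This is a generic textbook sketch, not the authors' filter; the process noise `q` and measurement noise `r` are arbitrary illustrative values:

```python
def kf_step(x, P, z, dt, q=0.01, r=1.0):
    # One predict/update cycle of a constant-velocity Kalman filter,
    # fusing a position fix z (e.g. an acoustic ping) into state x = [pos, vel].
    # Predict: x' = F x,  P' = F P F^T + Q,  with F = [[1, dt], [0, 1]]
    x = [x[0] + dt * x[1], x[1]]
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # Update with the position measurement (H = [1, 0])
    s = P[0][0] + r                      # innovation covariance
    k = [P[0][0] / s, P[1][0] / s]       # Kalman gain
    y = z - x[0]                         # innovation
    x = [x[0] + k[0] * y, x[1] + k[1] * y]
    P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
         [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    return x, P
```

The real system extends this to a nonlinear diver motion model and inserts visual-odometry increments between the sparse acoustic fixes.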
22. Video-Based Real Time Analysis of Plankton Particle Size Spectrum
- Author
- Yu, Jia, Yang, Xuewen, Wang, Nan, Tilstone, Gavin, Fileman, Elaine, Zheng, Haiyong, Yu, Zhibin, Fu, Min, and Zheng, Bing
- Subjects
- Underwater vision, clarity index, screening, segmentation, ROI, particle size spectrum, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Plankton is one of the most basic components of the marine ecosystem. The community structure and population changes of plankton are important ecological indicators of environmental conditions, and as a fundamental parameter of community structure, the size spectrum is very useful for evaluating the marine ecosystem. In this paper, we propose a real-time, adaptive algorithm to calculate the size spectrum from underwater plankton video captured by a high-resolution, high-speed optical camera. First, the algorithm screens the high-resolution plankton images to ensure that each plankton is counted once, using its clearest frame. Second, edge detection and morphological operations extract the plankton regions. Each particle is then approximated as an ellipse so that its volume can be calculated to obtain the size spectrum. Moreover, to help biologists study plankton in depth, we record a clear region containing each plankton to build a plankton database.
- Published
- 2019
- Full Text
- View/download PDF
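The volume step in the abstract above, treating each segmented particle as an ellipse and deriving a volume from it before binning into size classes, can be sketched as follows. The prolate-spheroid interpretation (rotating the ellipse about its major axis) and the bin edges are assumptions for illustration:

```python
import math

def particle_volume(major_mm, minor_mm):
    # Treat the detected ellipse (axes in mm) as a prolate spheroid obtained
    # by rotating it about its major axis: V = 4/3 * pi * (a/2) * (b/2)^2
    return (4.0 / 3.0) * math.pi * (major_mm / 2.0) * (minor_mm / 2.0) ** 2

def size_spectrum(axes, edges):
    # Histogram particle volumes into size classes defined by `edges`
    counts = [0] * (len(edges) - 1)
    for major, minor in axes:
        v = particle_volume(major, minor)
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    return counts
```

For a circular particle (major = minor = diameter) this reduces to the sphere volume, a quick sanity check on the formula.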
23. Omnidirectional Multicamera Video Stitching Using Depth Maps.
- Author
- Bosch, Josep, Istenio, Klemen, Gracias, Nuno, Garcia, Rafael, and Ridao, Pere
- Subjects
- Computer vision, Robot vision, Maps
- Abstract
Omnidirectional vision has recently attracted plenty of attention within the computer vision community, and the popularity of cameras able to capture 360° has increased in the last few years. A significant number of these cameras are composed of multiple individual cameras whose images or videos are stitched together in a later postprocessing stage. Stitching strategies have the complex objective of seamlessly joining the images so that the viewer has the feeling the panorama was captured from a single location. Conventional approaches either assume that the world is a simple sphere around the camera, which leads to visible misalignments in the final panoramas, or use feature-based stitching techniques that do not exploit the rigidity of multicamera systems. In this paper, we propose a new stitching pipeline based on state-of-the-art techniques for both online and offline applications. The goal is to stitch the images taking advantage of the available information about the multicamera system and the environment; exploiting the spatial information of the scene helps achieve significantly better results. In the online case, sparse data can be obtained from a simultaneous localization and mapping process; in the offline case, it is estimated from a 3-D reconstruction of the scene. The available information is represented as depth maps, which provide all the information in condensed form and easily represent complex shapes. The proposed online and offline pipelines are compared, visually and numerically, against conventional approaches using a real data set collected in a challenging underwater scene with a custom-designed multicamera system. The results surpass those of conventional approaches.
- Published
- 2020
- Full Text
- View/download PDF
24. Vision-Based In Situ Monitoring of Plankton Size Spectra Via a Convolutional Neural Network.
- Author
- Wang, Nan, Yu, Jia, Yang, Biao, Zheng, Haiyong, and Zheng, Bing
- Subjects
- Artificial neural networks, Plankton
- Abstract
Plankton size spectra monitoring is crucial for managing and conserving aquatic ecosystems. We therefore develop an in situ monitoring system that obtains the size spectra of plankton and information on their living status underwater. The system consists of an imaging unit and an information processing unit. The imaging unit applies darkfield illumination to enhance image contrast, and three lenses with different magnifications are switched automatically by a motor to capture plankton sizes from 3 μm to 3 mm. Moreover, the system analyzes the captured images in real time using the proposed multitask size spectra convolutional neural network, obtaining the size spectra and density distribution of plankton simultaneously. A field test confirms that the system performs well in both imaging and information processing. Furthermore, the system can record the living behavior of plankton, helping biologists study the aquatic ecosystem effectively and precisely.
- Published
- 2020
- Full Text
- View/download PDF
25. Underwater Image Dehazing via Unpaired Image-to-image Translation.
- Author
- Cho, Younggun, Jang, Hyesu, Malav, Ramavtar, Pandey, Gaurav, and Kim, Ayoung
- Abstract
Underwater imaging has long focused on dehazing and color correction to address the severe degradation caused by the water medium. In this paper, we propose a learning-based image restoration method that uses Generative Adversarial Networks (GANs). For network generality and learning flexibility, we build image restoration on unpaired image-translation frameworks. The proposed method employs multiple cyclic consistency losses that capture the characteristics and details of underwater images. To prepare unpaired images of clean and degraded scenes, we collected images from Flickr and filtered out false images using image characteristics. For validation, we extensively evaluated the proposed network on simulated and real underwater hazy images, and also assessed the enhanced output with conventional computer vision measures such as edge strength and feature-matching results.
- Published
- 2020
- Full Text
- View/download PDF
26. An Overview of Underwater Vision Enhancement: From Traditional Methods to Recent Deep Learning
- Author
-
Kai Hu, Chenghang Weng, Yanwen Zhang, Junlan Jin, and Qingfeng Xia
- Subjects
underwater vision ,video/image enhancement ,deep learning ,Naval architecture. Shipbuilding. Marine engineering ,VM1-989 ,Oceanography ,GC1-1581 - Abstract
Underwater video images, as the primary carriers of underwater information, play a vital role in human exploration and development of the ocean. Due to the optical characteristics of water bodies, underwater video images generally suffer from problems such as color bias and unclear image quality, and quality degradation is severe. Degraded images have adverse effects on the visual tasks of underwater vehicles, such as recognition and detection. Therefore, it is vital to obtain high-quality underwater video images. Firstly, this paper analyzes the imaging principle of underwater images and the reasons for their quality degradation, and briefly classifies the various existing methods. Secondly, it focuses on currently popular deep learning techniques for underwater image enhancement, and underwater video enhancement technologies are also covered. It also introduces some standard underwater data sets, common video/image evaluation indexes, and underwater-image-specific indexes. Finally, this paper discusses possible future developments in this area.
- Published
- 2022
- Full Text
- View/download PDF
27. MODELING UNDERWATER VISUAL AND FILTER FEEDING BY PLANKTIVOROUS SHEARWATERS IN UNUSUAL SEA CONDITIONS
- Author
-
Lovvorn, James R, Baduini, Cheryl L, and Hunt, George L
- Subjects
coccolithophore blooms ,diving birds ,euphausiids ,filter-feeding ,foraging models ,krill ,light attenuation ,planktivores ,Puffinus tenuirostris ,Short-tailed Shearwater ,underwater vision ,visual foraging ,Ecological Applications ,Ecology ,Evolutionary Biology - Abstract
Short-tailed Shearwaters (Puffinus tenuirostris) migrate between breeding areas in Australia and wintering areas in the Bering Sea. These extreme movements allow them to feed on swarms of euphausiids (krill) that occur seasonally in different regions, but they occasionally experience die-offs when availability of euphausiids or other prey is inadequate. During a coccolithophore bloom in the Bering Sea in 1997, hundreds of thousands of Short-tailed Shearwaters starved to death. One proposed explanation was that the calcareous shells of phytoplanktonic coccolithophores reduced light transmission, thus impairing visual foraging underwater. This hypothesis assumes that shearwaters feed entirely by vision (bite-feeding), but their unique bill and tongue morphology might allow nonvisual filter-feeding within euphausiid swarms. To investigate these issues, we developed simulation models of Short-tailed Shearwaters bite-feeding and filter-feeding underwater on the euphausiid Thysanoessa raschii. The visual (bite-feeding) model considered profiles of diffuse and beam attenuation of light in the Bering Sea among seasons, sites, and years with varying influence by diatom and coccolithophore blooms. The visual model indicated that over the huge range of densities in euphausiid swarms (tens to tens of thousands per cubic meter), neither light level nor prey density had appreciable effects on intake rate; instead, intake was severely limited by capture time and capture probability after prey were detected. Thus, for shearwaters there are strong advantages of feeding on dense swarms near the surface, where dive costs are low relative to fixed intake rate, and intake might be increased by filter-feeding. With minimal effects of light conditions, starvation of shearwaters during the coccolithophore bloom probably did not result from reduced visibility underwater after prey patches were found. 
Alternatively, turbidity from coccolithophores might have hindered detection of euphausiid swarms from the air.
- Published
- 2001
28. Geostatistics for Context-Aware Image Classification
- Author
-
Codevilla, Felipe, Botelho, Silvia S. C., Duarte, Nelson, Purkis, Samuel, Shihavuddin, A. S. M., Garcia, Rafael, Gracias, Nuno, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Nalpantidis, Lazaros, editor, Krüger, Volker, editor, Eklundh, Jan-Olof, editor, and Gasteratos, Antonios, editor
- Published
- 2015
- Full Text
- View/download PDF
29. Towards Real-Time Advancement of Underwater Visual Quality With GAN.
- Author
-
Chen, Xingyu, Yu, Junzhi, Kong, Shihan, Wu, Zhengxing, Fang, Xi, and Wen, Li
- Subjects
- *
UNDERWATER noise , *SOURCE code , *IMAGE reconstruction , *COST functions , *GALLIUM nitride , *OCEAN bottom - Abstract
Low visual quality has prevented underwater robotic vision from a wide range of applications. Although several algorithms have been developed, real-time and adaptive methods are lacking for real-world tasks. In this paper, we address this difficulty based on generative adversarial networks (GAN), and propose a GAN-based restoration scheme (GAN-RS). In particular, we develop a multibranch discriminator including an adversarial branch and a critic branch for the purpose of simultaneously preserving image content and removing underwater noise. In addition to adversarial learning, a novel dark channel prior loss also promotes the generator to produce realistic vision. More specifically, an underwater index is investigated to describe underwater properties, and a loss function based on the underwater index is designed to train the critic branch for underwater noise suppression. Through extensive comparisons on visual quality and feature restoration, we confirm the superiority of the proposed approach. Consequently, the GAN-RS can adaptively improve underwater visual quality in real time and achieve overall superior restoration performance. Finally, a real-world experiment is conducted on the seabed for grasping marine products, and the results are quite promising. The source code is publicly available at https://github.com/SeanChenxy/GAN_RS. [ABSTRACT FROM AUTHOR]
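A dark channel prior loss of the kind mentioned in this abstract can be illustrated as follows. This is a generic sketch of the dark channel computation with a simple mean penalty; the paper's actual loss and underwater index may differ:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image (values in [0, 1]): per-pixel minimum
    over channels, followed by a minimum filter over a local patch."""
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dark_channel_prior_loss(restored, patch=15):
    """Hedged sketch: penalize a non-dark dark channel, encouraging the
    restored image toward haze-free statistics."""
    return float(dark_channel(restored, patch).mean())
```

The intuition (from the dark channel prior literature) is that haze-free outdoor images have near-zero dark channels, so a bright dark channel signals residual veiling light.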
- Published
- 2019
- Full Text
- View/download PDF
30. The extended marine underwater environment database and baseline evaluations.
- Author
-
Jian, Muwei, Qi, Qiang, Yu, Hui, Dong, Junyu, Cui, Chaoran, Nie, Xiushan, Zhang, Huaxiang, Yin, Yilong, and Lam, Kin-Man
- Abstract
Images captured in underwater environments usually exhibit complex illumination, severe turbidity of the water, and objects with large variations in pose and spatial location, etc., which pose challenges to underwater vision research. In this paper, an extended underwater image database for salient-object detection or saliency detection is introduced. This database, called the Marine Underwater Environment Database (MUED), contains 8600 underwater images of 430 individual groups of conspicuous objects with complex backgrounds, multiple salient objects, and complicated variations in pose, spatial location, illumination, turbidity of water, etc. The publicly available MUED provides researchers in relevant industrial and academic fields with underwater images under different types of variations. Manually labeled ground-truth information is also included in the database, so as to facilitate research on more applicable and robust methods for both underwater image processing and underwater computer vision. The scale, accuracy, diversity, and background structure of MUED can not only be widely used to assess and evaluate the performance of state-of-the-art salient-object detection and saliency-detection algorithms for general images, but also particularly benefit the development of underwater vision technology and offer unparalleled opportunities to researchers in the underwater vision community and beyond.
• A diverse underwater image database is constructed and presented.
• The released benchmark can identify the strengths and weaknesses of existing algorithms for underwater images.
• The database offers unparalleled opportunities to researchers in underwater vision and beyond.
• The benchmark will benefit the development of underwater vision technology in the future.
[ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. Reef Encounter.
- Author
-
Douglas, Kate
- Subjects
- *
CORAL reef fishes , *COLOR of fish , *UNDERWATER vision , *PHOTOBIOLOGY - Abstract
Focuses on the visual systems of reef fish. Effect of water depth on the appearance of long-wavelength light; Why most nocturnal reef fish are red in color; Claim that fish have only two types of photoreceptors for color; Efforts of Justin Marshall of the Vision, Touch and Hearing Research Center, to study the vision and color of fish in their natural context.
- Published
- 2000
32. Sampling uncertainties of particle size distributions and derived fluxes
- Author
-
Emmanuel Boss, Rainer Kiko, Marc Picheral, Lionel Guidi, David A. Siegel, Kelsey Bisson, and B. B. Cael
- Subjects
Underwater vision ,Particle-size distribution ,Environmental science ,Sampling (statistics) ,Ocean Engineering ,Soil science ,Particle size - Abstract
In this study, we provide a method to quantify the uncertainty associated with sampling particle size distributions (PSD), using a global compilation of Underwater Vision Profiler observations (UVP, version 5). The UVP provides abundant in situ data of the marine PSD on global scales and has been used for a diversity of applications, but the uncertainty associated with its measurements has not been quantified, including how this uncertainty propagates into derived products of interest. We model UVP sampling uncertainty using Bayesian Poisson statistics and provide formulae for the uncertainty associated with a given sampling volume and observed particle count. We also model PSD observations using a truncated power law to better match the low concentration associated with rare large particles as seen by the UVP. We use the two shape parameters from this statistical model to describe changes in the PSD shape across latitude band, season, and depth. The UVP sampling uncertainty propagates into an uncertainty for modeled carbon flux exceeding 50%. The statistical model is used to extend the size interval used in a PSD-derived carbon flux model, revealing a high sensitivity of the PSD-derived flux model to the inclusion of small particles (80–128 μm). We provide avenues to address additional uncertainties associated with UVP-derived carbon flux calculations.
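The Bayesian Poisson treatment of counting uncertainty described above can be sketched as follows. The flat prior and the returned summary statistics are assumptions for illustration, not the paper's exact formulae:

```python
import math

def concentration_with_uncertainty(count, volume_l):
    """Bayesian Poisson sketch: observing `count` particles in `volume_l`
    litres, with a flat prior, gives a Gamma(count + 1, volume) posterior
    for the concentration. Returns the posterior mean and standard
    deviation (particles per litre)."""
    mean = (count + 1) / volume_l
    sd = math.sqrt(count + 1) / volume_l
    return mean, sd
```

The relative uncertainty scales as roughly 1/sqrt(count), which is why the rare large particles at the tail of the size distribution dominate the propagated flux uncertainty.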
- Published
- 2022
- Full Text
- View/download PDF
33. The Role of the Effect of Shadowing of Some Wavy Regions by Other Wavy Regions in the Formation of an Image of Snell's Window.
- Author
-
Molkov, A. A.
- Subjects
- *
IMAGE processing , *COMPUTER simulation , *RESERVOIRS , *UNDERWATER vision , *PARAMETER estimation - Abstract
The shadowing of some regions of a wavy water surface by other regions can significantly influence the backscattered signal not only in the problem of remote monitoring of a water reservoir at grazing angles (e.g., in radar), but also in the problems of underwater vision. In this work, we present the results of a theoretical study of this phenomenon with respect to the model of underwater imaging of the sky (Snell's window) near the image boundary. The numerical-simulation method is used to develop the statistically mean image of Snell's window, which allows for the shadowing effects, and the corrected formula of the estimate of a wave parameter, namely, the water-surface slope variance, is obtained using the value of blurring of the boundary of Snell's window. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
34. DESIGN AND CONTROL OF UNDERWATER HYBRID VEHICLE CAPABLE OF PERFORMING NUMEROUS TASKS.
- Author
-
Shahani, K., Wu, C., Persaud, R., and Song, H.
- Subjects
- *
HYBRID electric vehicle design & construction , *AUTONOMOUS underwater vehicles , *REMOTE control , *IMAGE sensors , *REMOTE submersibles - Abstract
This paper presents Ocean Strider, an underwater hybrid vehicle equipped with a hybrid (manual and autonomous) control system. The vehicle is designed to operate underwater under remote control by an operator and to seek out objects of interest to the user. The autonomous control system visually follows and maintains a fixed position relative to a stationary target, and avoids obstacles for reliable navigation using vision and ultrasonic sensors. Vision is a fundamental capability that enables the underwater robot to execute various tasks autonomously. Ocean Strider is an intelligent vehicle that can identify, locate, and respond to objects distinguished by distinct color codes and dimensions. Multiple laboratory experiments were conducted to evaluate the manual and autonomous control systems, including grasping, localization, and traveling patterns with respect to objects. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
35. Eyes Wide Shut: the impact of dim‐light vision on neural investment in marine teleosts.
- Author
-
Iglesias, Teresa L., Dornburg, Alex, Warren, Dan L., Wainwright, Peter C., Schmitz, Lars, and Economo, Evan P.
- Subjects
- *
BRAIN evolution , *UNDERWATER vision , *OSTEICHTHYES , *CIRCADIAN rhythms , *VERTEBRATE phylogeny , *FISHES - Abstract
Abstract: Understanding how organismal design evolves in response to environmental challenges is a central goal of evolutionary biology. In particular, assessing the extent to which environmental requirements drive general design features among distantly related groups is a major research question. The visual system is a critical sensory apparatus that evolves in response to changing light regimes. In vertebrates, the optic tectum is the primary visual processing centre of the brain and yet it is unclear how or whether this structure evolves while lineages adapt to changes in photic environment. On one hand, dim‐light adaptation is associated with larger eyes and enhanced light‐gathering power that could require larger information processing capacity. On the other hand, dim‐light vision may evolve to maximize light sensitivity at the cost of acuity and colour sensitivity, which could require less processing power. Here, we use X‐ray microtomography and phylogenetic comparative methods to examine the relationships between diel activity pattern, optic morphology, trophic guild and investment in the optic tectum across the largest radiation of vertebrates—teleost fishes. We find that despite driving the evolution of larger eyes, enhancement of the capacity for dim‐light vision generally is accompanied by a decrease in investment in the optic tectum. These findings underscore the importance of considering diel activity patterns in comparative studies and demonstrate how vision plays a role in brain evolution, illuminating common design principles of the vertebrate visual system. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. Single-shot underwater image restoration: A visual quality-aware method based on light propagation model.
- Author
-
Barros, Wagner, Nascimento, Erickson R., Barbosa, Walysson V., and Campos, Mario F.M.
- Subjects
- *
IMAGE reconstruction , *UNDERWATER imaging systems , *LIGHT propagation , *ATTENUATION of light , *BACKSCATTERING - Abstract
Highlights:
• A new approach to restore images acquired from underwater scenes.
• The approach is fully automatic and requires a single image.
• Visual-quality-driven restoration.
• Rough depth map estimation.
• Scene parameter extraction based on a non-linear optimization.
In this paper, we present a novel method to restore the visual quality of images from scenes immersed in participating media, in particular water. Our method builds upon an existing physics-based model and estimates the scene radiance by removing the medium's interference on light propagation. Our approach requires a single image as input and, by combining a physics-based model for light propagation with a set of quality metrics, reduces the artifacts and degradation imposed by attenuation, forward scattering, and backscattering effects. We show that the resulting images produced by our technique from underwater images are amenable to direct use as input to algorithms that do not assume disturbances from the medium. Our experiments demonstrate that, as far as visual image quality is concerned, our methodology outperforms both traditional image-based restoration approaches and state-of-the-art methods. Our approach brings advantages regarding descriptor distinctiveness, which enables the use of underwater images in legacy non-participating-media algorithms such as keypoint detection and description. [ABSTRACT FROM AUTHOR]
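The physics-based light propagation model referred to above is commonly written as I = J·e^(−c·d) + B·(1 − e^(−c·d)). A minimal inversion for the scene radiance J, assuming the backscatter B, attenuation coefficient c, and depth map d have already been estimated (the paper obtains the scene parameters via non-linear optimization, which is not reproduced here), might look like:

```python
import numpy as np

def invert_attenuation_model(observed, backscatter, beam_c, depth_map):
    """Invert the simplified underwater image formation model
        I = J * exp(-c * d) + B * (1 - exp(-c * d))
    for the scene radiance J, given generic parameters B, c, d."""
    t = np.exp(-beam_c * depth_map)   # per-pixel transmission
    t = np.maximum(t, 1e-3)           # clamp to avoid division blow-up
    return (observed - backscatter * (1.0 - t)) / t
```

Applying the forward model to a known radiance and then inverting it recovers the original values, which is a useful sanity check when fitting the parameters.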
- Published
- 2018
- Full Text
- View/download PDF
37. Autonomous Underwater Intervention: Experimental Results of the MARIS Project.
- Author
-
Simetti, Enrico, Wanderlingh, Francesco, Torelli, Sandro, Bibuli, Marco, Odetti, Angelo, Bruzzone, Gabriele, Rizzini, Dario Lodi, Aleotti, Jacopo, Palli, Gianluca, Moriello, Lorenzo, and Scarcia, Umberto
- Subjects
AUTONOMOUS underwater vehicles ,MANIPULATORS (Machinery) ,MECHATRONICS ,UNDERWATER vision ,REMOTE submersibles - Abstract
Autonomous underwater vehicles are frequently used for survey missions and monitoring tasks; however, manipulation and intervention tasks are still largely performed with a human in the loop. Employing autonomous vehicles for these tasks has received growing interest in the last ten years, and a few pioneering projects have been funded on this topic. Among these projects, the Italian MARIS project had the goal of developing technologies and methodologies for the use of autonomous underwater vehicle manipulator systems in underwater manipulation and transportation tasks. This work presents the developed control framework, the mechatronic integration, and the project's final experimental results on floating underwater intervention. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. Analysis and Compensation of Geometric Distortions, Appearing when Observing Objects under Water.
- Author
-
Konovalenko, I. A., Sidorchuk, D. S., and Zenkin, G. M.
- Abstract
This paper develops an analytical description of the visual geometric distortions that appear when observing objects under water. The problem of finding the virtual image of a point underwater light source is solved. Three results are derived from the solution of this problem. The first is an equation for the set of virtual images of a point light source formed under all possible positions of the observer. The second is an equation for the observed image of an underwater plane for a stationary observer. The third is the coordinate transformation simulating the underwater distortion of an optical system, together with the inverse transformation, which allows compensation of the underwater distortion without an underwater calibration procedure. Experimental confirmation of the derived laws is given. [ABSTRACT FROM AUTHOR]
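The refraction physics underlying these distortions rests on Snell's law at the air-water interface. Below is a minimal sketch of two standard building blocks (the refracted ray angle and the paraxial apparent-depth approximation), not the paper's full coordinate transformation:

```python
import math

def refracted_angle(theta_air_deg, n_water=1.33):
    """Snell's law at a flat air-water interface: in-water ray angle (deg)
    for a given in-air viewing angle (deg), using n_air = 1."""
    s = math.sin(math.radians(theta_air_deg)) / n_water
    return math.degrees(math.asin(s))

def apparent_depth(true_depth, n_water=1.33):
    # Paraxial approximation: an underwater point viewed from above
    # appears shallower by a factor of ~1/n.
    return true_depth / n_water
```

These small-angle relations explain why underwater objects appear closer and displaced; the paper's contribution is the exact transformation valid away from the paraxial regime.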
- Published
- 2018
- Full Text
- View/download PDF
39. Underwater Vision-Based Gesture Recognition: A Robustness Validation for Safe Human–Robot Interaction
- Author
-
Davide Chiarella, Arturo Gomez Chavez, Andrea Ranieri, and Andreas Birk
- Subjects
Underwater vision ,business.industry ,Computer science ,underwater human-robot interaction ,deep learning ,Robotics ,gesture-based language ,Underwater robotics ,Human–robot interaction ,Computer Science Applications ,Gesture recognition ,Unexploded ordnance ,Control and Systems Engineering ,Systems engineering ,Robot ,Artificial intelligence ,Electrical and Electronic Engineering ,Underwater ,business ,data augmentation - Abstract
Underwater robotics requires very reliable and safe operations. This holds especially for missions in cooperation with divers, who, despite the significant advancements of marine robotics in recent years, are still essential for many underwater operations. Possible application cases of underwater human-robot collaboration include marine science, archeology, oil and gas production (OGP), handling of unexploded ordnance (UXO), e.g., from WWII ammunition dumped in the seas, or inspection and maintenance of marine infrastructure like pipelines, harbors, or renewable energy installations, to name just a few examples. We present a fully integrated approach to Underwater Human Robot Interaction (U-HRI) in the form of a front-end for gesture recognition combined with a back-end containing a full language interpreter. The gesture-based language is derived from the existing standard gestures for communication between human divers. It enables a diver to issue single commands as well as complex mission specifications to an Autonomous Underwater Vehicle (AUV), as demonstrated in several field trials. The gesture recognition is an essential component of the overall approach. It requires high reliability under the challenging conditions of the underwater domain; in particular, there is a high amount of variation in visual data due to various effects in underwater image formation. In this article, we hence investigate different Machine Learning (ML) methods for robust diver gesture recognition. This includes a classical ML approach and four state-of-the-art Deep Learning (DL) methods. Furthermore, we introduce a physically realistic way to use range information for adding underwater haze, producing meaningful additional data from existing real-world data. This can be of interest for creating evaluation data for underwater perception in general or for producing additional training data for ML-based approaches.
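The range-based haze augmentation mentioned above can be sketched with the standard underwater image formation model. The attenuation coefficient and veiling-light color below are illustrative values, not the calibrated ones used in the article:

```python
import numpy as np

def add_underwater_haze(img, range_map, beam_c=0.4, veil=(0.1, 0.3, 0.35)):
    """Range-based haze augmentation sketch: attenuate each pixel of an
    H x W x 3 image by exp(-c * r) and blend in a veiling light, following
    the standard underwater image formation model."""
    t = np.exp(-beam_c * range_map)[..., None]   # per-pixel transmission
    veil = np.asarray(veil, dtype=img.dtype)
    return img * t + veil * (1.0 - t)
```

Pixels at zero range pass through unchanged, while distant pixels converge to the veiling color, mimicking how real haze grows with distance in the water column.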
- Published
- 2021
- Full Text
- View/download PDF
40. Information Communication Technology (ICT) Tools for Preservation of Underwater Environment: A Vision-Based Posidonia Oceanica Monitoring
- Author
-
Riccardo Costanzi, Giovanni Peralta, Lorenzo Pollini, and Francesco Ruscio
- Subjects
Underwater data geo-referencing ,Vision based ,biology ,Multimedia ,Computer science ,Underwater vision ,Posidonia oceanica monitoring ,Visual odometry ,Ocean Engineering ,Oceanography ,biology.organism_classification ,computer.software_genre ,Information and Communications Technology ,Posidonia oceanica ,Ict tools ,Underwater ,computer - Abstract
Underwater monitoring activities are crucial for the preservation of marine ecosystems. Currently, scuba divers are involved in data collection campaigns that are repetitive, dangerous, and expensive. This article describes the application of Information Communication Technology (ICT) tools to underwater visual data for monitoring purposes. The data refer to a Posidonia Oceanica survey mission carried out by a scuba diver using a Smart Dive Scooter equipped with visual acquisition and acoustic localization systems. An acoustic-based strategy for geo-referencing the optical dataset is reported. It exploits the synchronization between the audio track extracted from a camera and the transponder pings adopted for acoustic positioning. The positioning measurements are employed within an extended Kalman filter to estimate the diver's path during the mission. A visual odometry algorithm is implemented within the filter to refine the navigation state estimate of the diver relative to acoustic positioning alone. Moreover, a smoothing step based on the Rauch-Tung-Striebel smoother is applied to further improve the estimated diver positions. Finally, the article reports the results of two different data processing pipelines for monitoring applications. The first is an image mosaic obtained by concatenating subsequent frames, whereas the second is a qualitative distribution of Posidonia Oceanica over the mission area obtained through an image segmentation process. The two outcomes are plotted over a satellite image of the surveyed area, showing that the proposed process is an effective tool capable of facilitating divers in their monitoring and inspection activities.
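The Rauch-Tung-Striebel smoothing step mentioned above can be illustrated for a scalar state. This sketch assumes the filtered and predicted moments come from a standard forward Kalman filter, and omits the multi-dimensional navigation state used in the article:

```python
import numpy as np

def rts_smoother(xf, Pf, xp, Pp, F):
    """Rauch-Tung-Striebel smoother for a scalar state. Inputs are the
    filtered means/variances (xf, Pf) and one-step predicted
    means/variances (xp, Pp) from a forward Kalman filter with state
    transition coefficient F. Returns smoothed means and variances."""
    n = len(xf)
    xs, Ps = xf.copy().astype(float), Pf.copy().astype(float)
    # Backward recursion, starting from the final filtered estimate.
    for k in range(n - 2, -1, -1):
        G = Pf[k] * F / Pp[k + 1]                        # smoother gain
        xs[k] = xf[k] + G * (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + G * (Ps[k + 1] - Pp[k + 1]) * G
    return xs, Ps
```

Because the backward pass reuses future measurements, the smoothed track is never worse than the filtered one, which is why it is applied offline to the diver's full trajectory.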
- Published
- 2021
- Full Text
- View/download PDF
41. The Snell’s Window Image for Remote Sensing of the Upper Sea Layer: Results of Practical Application
- Author
-
Alexander A. Molkov and Lev S. Dolin
- Subjects
underwater vision ,Snell’s window image ,inherent optical properties ,slope variance ,remote sensing ,Naval architecture. Shipbuilding. Marine engineering ,VM1-989 ,Oceanography ,GC1-1581 - Abstract
Estimation of water optical properties can be performed by photo or video recording of the rough sea surface from underwater, at the angle of total internal reflection, in the direction away from the sun, at several depths. In this case, the key characteristic of the obtained image is the border of the Snell's window, which is a randomly distorted image of the sky. Its distortion changes under the simultaneous action of sea roughness and light scattering; however, after correct "decoding" of the image, the two effects can be determined separately. This paper presents the corresponding algorithms for achieving this from Snell's window images. The images were obtained in waters with different optical properties and wave conditions under several types of illumination. Practical guidelines for recording, processing, and analyzing images of the Snell's window are also formulated.
- Published
- 2019
- Full Text
- View/download PDF
42. Diazotrophic Trichodesmium influence on ocean color and pigment composition in the South West tropical Pacific.
- Author
-
Dupouy, Cécile, Frouin, Robert, Tedetti, Marc, Maillard, Morgane, Rodier, Martine, Lombard, Fabien, Guidi, Lionel, Picheral, Marc, Duhamel, Solange, Charrière, Bruno, and Sempéré, Richard
- Subjects
CYANOBACTERIA ,MARINE ecology ,EUPHOTIC zone ,OCEAN color ,UNDERWATER vision - Abstract
We assessed the influence of the marine diazotrophic cyanobacterium Trichodesmium on the bio-optical properties of South West tropical Pacific waters (18-22° S, 160° E-160° W) during the February-March 2015 OUTPACE cruise. We performed measurements of backscattering and absorption coefficients, irradiance, and radiance in the euphotic zone, and took Underwater Vision Profiler 5 (UVP5) pictures for counting the largest Trichodesmium spp. colonies. Pigment concentrations were determined by fluorimetry and by high performance liquid chromatography, and picoplankton abundance by flow cytometry. Trichome concentration was estimated from pigment algorithms and validated by surface visual counts. As a result, the large colonies were well correlated with the trichome concentration estimates (though with a large factor of 600 to 900, due to aggregation processes). Large Trichodesmium abundance was always associated with particulate absorption at a peak of mycosporine-like amino acid absorption and high particulate backscattering, but not with high fluorescence, high chlorophyll-a concentration, or blue particulate absorption in the water column. Along the West to East transect, Trichodesmium together with Prochlorococcus represented the major part of total chlorophyll, and the other groups were negligible. Trichodesmium contribution to chlorophyll was highest in the Melanesian Archipelago around New Caledonia and Vanuatu, progressively decreased toward the vicinity of the Fiji Islands, and reached a minimum in the South Pacific gyre, where the contribution of Prochlorococcus was maximum. At the frontal LDB, Trichodesmium and Prochlorococcus had almost equal contributions. The relationship between normalized water-leaving radiance in the ultraviolet and visible domains, nLw, and chlorophyll was generally similar to that found during BIOSOPE in the Eastern tropical region. Principal component analysis (PCA) of OUTPACE data showed that nLw was strongly correlated with chlorophyll except in the green and yellow domains. These results, as well as differences in the PCA of BIOSOPE data, suggest that nLw variability in the green and yellow during OUTPACE was influenced by other variables associated with Trichodesmium presence, such as the backscattering coefficient, phycoerythrin fluorescence, and/or zeaxanthin absorption. Trichodesmium detection should then involve examination of nLw at the green and yellow wavelengths. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
43. Marine particles in the Gulf of Alaska shelf system: Spatial patterns and size distributions from in situ optics.
- Author
-
Turner, Jessica S., Pretty, Jessica L., and McDonnell, Andrew M.P.
- Subjects
- *
SUBMARINE geology , *UNDERWATER vision , *PARTICLE size distribution , *GLACIERS , *RUNOFF - Abstract
The Gulf of Alaska is a biologically productive ocean region surrounded by coastal mountains with high seasonal runoff from rivers and glaciers. In this complex environment, we measured the concentrations and size distributions of 2.5 µm–27 mm marine particles using the Laser in situ Scattering and Transmissometry device (LISST-DEEP) and the Underwater Vision Profiler 5 (UVP) during summer 2015. We analyzed the spatial distribution of particles across a wide range of size classes to determine the probable drivers. Spatially, total particle concentrations surpassed 1000 µl/l nearshore in the northeasternmost entrances and in the outflow of Cook Inlet, as well as offshore past the shelf break. These dual maxima suggest high lithogenic inputs nearshore and high biological production at and beyond the shelf break. Most large particles (> 0.5 mm) imaged by the UVP were detrital aggregates. In nearshore surface waters near river inputs, size distributions revealed small size classes (< 100 µm) to be most influential. At the shelf break, size distributions revealed a dual peak in both small (< 100 µm) and very large (> 2 mm) size classes. This study highlights the importance of lithogenic inputs from a mountainous margin to the coastal ocean and their potential to enhance sinking of biological material produced at the shelf break. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
44. Underwater image enhancement via extended multi-scale Retinex.
- Author
-
Zhang, Shu, Wang, Ting, Dong, Junyu, and Yu, Hui
- Subjects
- *
UNDERWATER vision , *IMAGE enhancement (Imaging systems) , *MULTISCALE modeling , *COMPUTER vision , *IMAGE analysis , *ABSORPTION - Abstract
Underwater exploration has become an active research area over the past few decades. Image enhancement is one of the challenges for computer-vision-based underwater research because of the degradation of images in the underwater environment. Scattering and absorption are the main causes of reduced visibility in the underwater environment, producing, for example, blur, low contrast, and shortened visual ranges. To tackle the aforementioned problems, this paper presents a novel method for underwater image enhancement inspired by the Retinex framework, which simulates the human visual system. The term Retinex is a combination of "Retina" and "Cortex". The proposed method, namely LAB-MSR, is achieved by modifying the original Retinex algorithm. It utilizes a combination of the bilateral filter and trilateral filter on the three channels of the image in CIELAB color space, according to the characteristics of each channel. With real-world data, experiments are carried out to demonstrate both the degradation characteristics of underwater images at different turbidities and the competitive performance of the proposed method. [ABSTRACT FROM AUTHOR]
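The multi-scale Retinex (MSR) baseline that LAB-MSR modifies can be sketched as follows. This version uses a plain Gaussian surround on a single channel, whereas the paper substitutes bilateral/trilateral filters in CIELAB space; the scale values are conventional defaults, not the paper's:

```python
import numpy as np

def gaussian_blur(channel, sigma):
    """Separable Gaussian blur for a 2D array, using numpy only."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    pad = np.pad(channel, radius, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def multi_scale_retinex(channel, sigmas=(15, 80, 250), eps=1e-6):
    """MSR: average of log(image) - log(Gaussian surround) over scales."""
    log_i = np.log(channel + eps)
    out = sum(log_i - np.log(gaussian_blur(channel, s) + eps) for s in sigmas)
    return out / len(sigmas)
```

A uniform region yields an MSR response of zero, so the operator amplifies only local contrast relative to the surround, which is what restores detail in hazy underwater frames.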
- Published
- 2017
- Full Text
- View/download PDF
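The multi-scale Retinex (MSR) step at the core of such methods computes log(I) − log(G_σ ∗ I) at several Gaussian scales and averages the results. A grayscale-only sketch of generic MSR with a separable Gaussian blur — this illustrates the standard MSR formula, not the paper's full LAB-MSR with bilateral/trilateral filtering, and the small scales are chosen for illustration on modest image sizes:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with two 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    # blur rows, then columns; mode="same" keeps the image size
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def multi_scale_retinex(img, sigmas=(2, 8, 32)):
    """MSR: average of log(I) - log(Gaussian-blurred I) over several scales."""
    img = img.astype(float) + 1.0  # offset to avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_blur(img, s))
    return out / len(sigmas)

# On a uniform image the illumination estimate matches the image,
# so MSR output is ~0 away from the borders.
flat = np.full((200, 200), 100.0)
out = multi_scale_retinex(flat)
```

Full-size images typically use much larger scales (e.g. on the order of 15/80/250 pixels).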
45. Estimation of underwater visibility in coastal and inland waters using remote sensing data.
- Author
-
Kulshreshtha, Anuj and Shanmugam, Palanisamy
- Subjects
UNDERWATER vision ,UNDERWATER physiology ,TERRITORIAL waters ,CHLOROPHYLL content of seawater ,SUSPENDED sediments - Abstract
An optical method is developed to estimate water transparency (or underwater visibility) in terms of Secchi depth (Z), following remote sensing and contrast transmittance theory. The major factors governing the variation in Z, namely turbidity and the attenuation length 1/(c + K) (c = beam attenuation coefficient; K = diffuse attenuation coefficient at 531 nm), are obtained using band-ratioing techniques. It was found that the band ratio of remote sensing reflectance, expressed as (R(443) + R(490))/(R(555) + R(670)), contains essential information about the water column optical properties and thereby correlates positively with turbidity. The beam attenuation coefficient c at 531 nm is obtained from a linear relationship with turbidity. To derive the vertical diffuse attenuation coefficient K at 531 nm, K(490) is estimated as a function of the reflectance ratio R(670)/R(490), which provides the bio-optical link between chlorophyll concentration and K(531). The present algorithm was applied to MODIS-Aqua images, and the results were evaluated by matchup comparisons between the remotely estimated Z and in situ Z in coastal waters off Point Calimere and its adjoining regions on the southeast coast of India. The results showed the pattern of increasing Z from shallow turbid waters to deep clear waters. The statistical evaluation showed that the percent mean relative error between the MODIS-Aqua-derived Z and in situ Z values was within ±25%. The close agreement achieved in spatial contours of MODIS-Aqua-derived Z and in situ Z for January 2014 and August 2013 demonstrates the model's capability to yield accurate estimates of Z in coastal, estuarine, and inland waters. The spatial contours are included to provide the best visualization of the measured, modeled (in situ), and satellite-derived Z products.
The modeled and satellite-derived Z values were compared with measurement data, yielding RMSE = 0.079, MRE = −0.016, and R = 0.95 for the modeled Z, and RMSE = 0.075, MRE = 0.020, and R = 0.95 for the satellite-derived Z products. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
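The final step of such a scheme is simple once c and K are in hand: contrast transmittance theory gives Z ≈ Γ/(c + K), with a coupling constant Γ (classically about 8.69 for a standard Secchi disk). A sketch of the pipeline with hypothetical regression coefficients — the constants a, b, c0, c1, k0, k1 below are illustrative placeholders, not the coefficients fitted in the paper:

```python
import numpy as np

GAMMA = 8.69  # classical Secchi coupling constant from contrast transmittance theory

def secchi_depth(R443, R490, R555, R670,
                 a=5.0, b=0.8, c0=0.05, c1=1.2, k0=0.02, k1=0.5):
    """Estimate Secchi depth Z = GAMMA / (c + K) from reflectance band ratios.

    The band-ratio -> turbidity, turbidity -> c, and ratio -> K regressions
    below use made-up coefficients purely for illustration; the paper fits
    these relationships empirically to in situ data.
    """
    ratio = (R443 + R490) / (R555 + R670)  # band ratio correlated with turbidity
    turbidity = a * ratio + b              # hypothetical linear regression
    c_531 = c0 + c1 * turbidity            # beam attenuation at 531 nm
    K_531 = k0 + k1 * (R670 / R490)        # diffuse attenuation via red/blue ratio
    return GAMMA / (c_531 + K_531)
```

With these placeholder regressions, increasing the turbidity coefficient lowers Z, matching the expected shallow-turbid to deep-clear pattern.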
46. The Pinax-model for accurate and efficient refraction correction of underwater cameras in flat-pane housings.
- Author
-
Łuczyński, Tomasz, Pfingsthorn, Max, and Birk, Andreas
- Subjects
- *
UNDERWATER cameras , *REFRACTION (Optics) , *PINHOLE cameras , *SALINITY , *CALIBRATION , *OCEAN engineering - Abstract
The calibration and refraction correction process presented here for underwater cameras with flat-pane interfaces is easy and convenient to use in real-world applications while yielding very accurate results. The correction is derived from an analysis of the axial camera model for underwater cameras, which is, among other drawbacks, computationally hard to tackle. It is shown how realistic constraints on the distance of the camera to the window can be exploited, leading to an approach dubbed the Pinax model, as it combines aspects of a virtual pinhole model with the projection function of the axial camera model. It allows the pre-computation of a lookup table for very fast, highly accurate refraction correction of the flat pane. The model takes the refractive index of water into account, especially with respect to salinity, so it is sufficient to calibrate the underwater camera only once, in air. Real-world experiments with several underwater cameras in different salt- and freshwater conditions demonstrate that the proposed process outperforms standard methods. Among other results, it is shown that the presented method yields accurate results with a single in-air calibration and even with merely estimated salinity values. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
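The geometric core of any flat-port model is Snell's law applied at the housing interface; the Pinax contribution is to fold this into a precomputed per-pixel lookup table. A minimal sketch of the vector form of Snell refraction through an air-glass-water stack — the refractive indices are nominal values, and the salinity dependence is a very rough placeholder, not the relation used in the paper:

```python
import numpy as np

def refract(direction, normal, n1, n2):
    """Vector form of Snell's law: refract a ray from medium n1 into n2.

    `normal` points from the interface toward the incident side.
    Returns the refracted unit direction, or None on total internal reflection.
    """
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def water_index(salinity_psu):
    """Very rough refractive index of water vs. salinity (illustrative only)."""
    return 1.333 + 2e-4 * salinity_psu

# Ray leaving the camera at 30 degrees to the port normal: air -> glass -> water
normal = np.array([0.0, 0.0, 1.0])  # flat-port normal, toward the camera
ray = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
in_glass = refract(ray, normal, 1.0, 1.49)                      # air -> acrylic
in_water = refract(in_glass, normal, 1.49, water_index(35.0))   # glass -> seawater
```

Because the interfaces are parallel, the glass index drops out of the final angle (n_air·sin θ_air = n_water·sin θ_water), which is what makes a single in-air calibration plus a salinity estimate sufficient.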
47. Three Birds, One Stone: Unified Laser-Based 3-D Reconstruction Across Different Media
- Author
-
Yang Cong, Gan Sun, and Changjun Gu
- Subjects
Underwater vision, Computer science, Orientation (computer vision), Iterative reconstruction, Laser, Refraction, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Underwater, Instrumentation, Pose, Reference frame, Camera resectioning - Abstract
Laser-based 3-D reconstruction is crucial for autonomous manipulation and resource exploration in both air and underwater scenarios, due to its high precision and robustness to disturbances. However, most current laser-based 3-D reconstruction sensors cannot be applied directly across different media without recalibration, e.g., air, air → glass → air, and air → glass → water; achieving this is what we call killing three birds with one stone. The reason is that a change in medium density alters the sensor calibration parameters and thus introduces a systematic geometric bias. To address these challenges, a unified laser-based 3-D reconstruction method is proposed so that the laser-based scanner can be used across different media without recalibration: we first explicitly model the refraction of the underwater vision system and transform it across different media into a unified sensor reference frame. More specifically, an underwater refractive camera calibration model is designed to calibrate the orientation and position of the refractive interface, which improves the accuracy of underwater reconstruction for the laser-based scanner; we then present a refractive pose estimation model with a unified sensor reference frame, which allows the sensor to be applied directly across different scenarios. In the experiments, we validate the performance of our method on our underwater 3-D scanner prototype. Reconstruction results on several objects and scenarios demonstrate the effectiveness of the proposed method and the practicality of the designed sensor.
- Published
- 2021
- Full Text
- View/download PDF
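The "systematic geometric bias" the authors describe is easy to see in the simplest case: viewed straight down through a flat interface, an object at true depth d appears at depth d/n, so an uncorrected in-air camera model underestimates range by roughly the refractive index. A back-of-the-envelope sketch using the small-angle (paraxial) approximation — not the paper's full refractive model:

```python
# Small-angle apparent-depth approximation at a flat air-water interface.
N_WATER = 1.333  # nominal refractive index of fresh water

def apparent_depth(true_depth_m, n=N_WATER):
    """Depth an uncorrected in-air camera model would report for a point
    at true_depth_m below a flat interface, viewed near-normally."""
    return true_depth_m / n

def range_bias(true_depth_m, n=N_WATER):
    """Systematic underestimate (in metres) if refraction is ignored."""
    return true_depth_m - apparent_depth(true_depth_m, n)
```

At 2 m range this uncorrected bias is already about half a metre, which is why crossing media without recalibration requires explicit refraction modeling.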
48. Polarimetric Calculation Method of Global Pixel for Underwater Image Restoration
- Author
-
Jie Chen, Gao Jun, Jin Haihong, LiJin Qian, and Fan Zhiguo
- Subjects
Computer science, Underwater vision, Polarimetry, degree of polarization of the target, Computer vision, Electrical and Electronic Engineering, Underwater, Image restoration, Pixel, Polarimetric imaging, polarimetric calculation of global pixel, Polarization (waves), Atomic and Molecular Physics and Optics, Radiance, Degree of polarization, underwater image restoration - Abstract
The estimation of the degree of polarization of the backscattered light and the target light is key to removing scattering in underwater polarimetric imaging. This paper proposes a new polarization-based scattering removal method that automatically computes the degree of polarization of the target light at each pixel of the scene, which aids the restoration of underwater scene detail. A pixel-level global model is established using a gradient prior on the polarization information of the total-radiance image. Unlike previous work, which treats the degree of polarization of the target light as a constant, the global calculation method solves for a spatially varying degree-of-polarization image of the target light; reconstruction experiments on real-world underwater scenes verify the effectiveness of the algorithm. The experimental results demonstrate that the global pixel calculation method markedly improves recovery of image detail and yields clear underwater vision: the influence of backscattered light on underwater imaging is well suppressed and image contrast is significantly improved, providing a new way to enhance the performance of underwater polarimetric imaging.
- Published
- 2021
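The classical constant-DoP pipeline this work generalizes takes two images through orthogonal polarizer orientations, estimates backscatter from their difference, and subtracts it; the paper's contribution is replacing the scalar target DoP with a per-pixel field. A sketch of the constant-DoP baseline (Schechner-style descattering; p_back and p_target are assumed scalar here):

```python
import numpy as np

def polarimetric_restore(I_max, I_min, p_back, p_target=0.0):
    """Constant-DoP polarimetric descattering baseline.

    I_max, I_min : images through best/worst polarizer orientations.
    p_back       : degree of polarization of the backscattered light.
    p_target     : degree of polarization of the target light (0 in the
                   classical model; the paper instead solves it per pixel).
    """
    I_total = I_max + I_min
    delta = I_max - I_min
    # With partially polarized target light: delta = p_back*B + p_target*T
    # and I_total = B + T; solve this 2x2 system per pixel for backscatter B.
    B = (delta - p_target * I_total) / (p_back - p_target)
    T = I_total - B  # recovered target radiance
    return np.clip(T, 0, None)
```

Replacing the scalar `p_target` with a per-pixel array is exactly the generalization the abstract describes; the same two equations then hold pixel-wise.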
49. Two-Branch Deep Neural Network for Underwater Image Enhancement in HSV Color Space
- Author
-
Runmin Cong, Wei Gao, Feng Shao, Junkang Hu, and Qiuping Jiang
- Subjects
Artificial neural network, Computer science, Underwater vision, Applied Mathematics, HSL and HSV, Convolutional neural network, RGB color space, Signal Processing, Chrominance, Contrast (vision), Computer vision, Electrical and Electronic Engineering, Underwater - Abstract
Due to light absorption and scattering, underwater images usually suffer from quality deteriorations such as color cast and reduced contrast. These diverse degradations not only fall short of user expectations but also cause a significant performance drop in many underwater vision applications. This letter proposes a novel two-branch deep neural network for underwater image enhancement (UIE) that separately removes color cast and enhances image contrast by fully leveraging the HSV color space's ability to disentangle chrominance and intensity. Specifically, the input underwater image is first converted into the HSV color space and split into HS and V channels, which serve as the inputs of the two branches. The color-cast removal branch then enhances the H and S channels with a generative adversarial network architecture, while the contrast enhancement branch enhances the V channel via a traditional convolutional neural network. The channels enhanced by the two branches are merged and converted back into RGB color space to obtain the final result. Experimental results demonstrate that, compared with state-of-the-art UIE methods, our method produces much more visually pleasing enhanced results.
- Published
- 2021
- Full Text
- View/download PDF
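The branch split itself is a one-line idea: convert to HSV, route H/S and V through separate enhancers, merge, and convert back. A sketch with trivial placeholder enhancers — an identity on H/S and a min-max contrast stretch on V — standing in for the paper's GAN and CNN branches:

```python
import colorsys
import numpy as np

def stretch(v):
    """Placeholder contrast enhancer for the V channel (min-max stretch)."""
    lo, hi = v.min(), v.max()
    return (v - lo) / (hi - lo) if hi > lo else v

def enhance_hsv(rgb):
    """Two-branch HSV pipeline sketch. Here the H/S branch is the identity
    and the V branch is a simple contrast stretch; the paper uses a GAN
    and a CNN respectively."""
    h, w, _ = rgb.shape
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb.reshape(-1, 3)])
    hs, v = hsv[:, :2], hsv[:, 2]
    v = stretch(v)  # contrast-enhancement branch (V channel only)
    # (a color-cast removal branch would transform hs here)
    out = np.array([colorsys.hsv_to_rgb(a, b, c) for (a, b), c in zip(hs, v)])
    return out.reshape(h, w, 3)
```

The per-pixel `colorsys` loop keeps the sketch dependency-free; a real implementation would use a vectorized RGB-HSV conversion. Operating on V alone means contrast changes cannot shift hue, which is the main argument for the HSV disentanglement.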
50. Deep Joint Depth Estimation and Color Correction From Monocular Underwater Images Based on Unsupervised Adaptation Networks
- Author
-
Zhi-Hui Wang, Xinchen Ye, Rui Xu, Xin Fan, Haojie Li, Baoli Sun, and Zheng Li
- Subjects
Monocular, Underwater vision, Computer science, Machine vision, Deep learning, Color correction, Perspective (graphical), Visualization, Media Technology, Robot, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Underwater - Abstract
Degraded visibility and geometric distortion typically make underwater vision more intractable than open-air vision, which impedes the development of underwater machine vision and robotic perception. This paper therefore addresses joint underwater depth estimation and color correction from monocular underwater images, aiming to exploit the mutual benefits between these two related tasks from a multi-task perspective. Our core ideas lie in a new deep learning architecture. Because effective underwater training data are scarce, and models trained on synthetic data generalize poorly to real-world underwater images, we approach the problem from the novel perspective of style-level and feature-level adaptation and propose an unsupervised adaptation network for the joint learning problem. Specifically, a style adaptation network (SAN) first learns a style-level transformation that adapts in-air images to the style of the underwater domain. We then formulate a task network (TN) that jointly estimates scene depth and corrects color from a single underwater image by learning domain-invariant representations. The whole framework can be trained end-to-end in an adversarial manner. Extensive experiments are conducted under air-to-water domain adaptation settings. We show that the proposed method performs favorably against state-of-the-art methods in both depth estimation and color correction.
- Published
- 2020
- Full Text
- View/download PDF