31 results
Search Results
2. Sound Source Localization for Unmanned Aerial Vehicles in Low Signal-to-Noise Ratio Environments.
- Author
-
Wu, Sheng, Zheng, Yijing, Ye, Kun, Cao, Hanlin, Zhang, Xuebo, and Sun, Haixin
- Subjects
- *
TIME delay estimation , *MULTIPLE Signal Classification , *ACOUSTIC localization , *SIGNAL-to-noise ratio , *DRONE aircraft , *FOLK music - Abstract
In recent years, with the continuous development and popularization of unmanned aerial vehicle (UAV) technology, the surge in the number of UAVs has led to an increasingly serious problem of illegal flights. Traditional acoustic-based UAV localization techniques have limited ability to extract short-time and long-time signal features, and have poor localization performance in low signal-to-noise ratio environments. For this reason, in this paper, we propose a deep learning-based UAV localization technique for low signal-to-noise ratio environments. Specifically, on the one hand, we propose a multiple signal classification (MUSIC) pseudo-spectral normalized mean processing technique to improve the direction-of-arrival (DOA) estimation performance of the traditional broadband MUSIC algorithm. On the other hand, we design a DOA estimation algorithm for UAV sound sources based on a time delay estimation neural network, which solves the problems of limited DOA resolution and the poor performance of traditional time delay estimation algorithms under low signal-to-noise ratio conditions. We verify the feasibility of the proposed method through simulation experiments and experiments in real scenarios. The experimental results show that our proposed method can locate the approximate flight path of a UAV within 20 m in a real scenario with a signal-to-noise ratio of −8 dB. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
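The record above builds on the MUSIC pseudo-spectrum. As a point of reference, here is a minimal narrowband MUSIC sketch for a uniform linear array; the array geometry, snapshot count, and normalization are illustrative assumptions, not the paper's broadband pipeline:

```python
import numpy as np

def music_pseudospectrum(X, n_sources, d_over_lambda, angles_deg):
    """Normalized narrowband MUSIC pseudo-spectrum for a uniform linear array."""
    n_mics = X.shape[0]
    # Sample covariance of the snapshots (n_mics x n_snapshots)
    R = (X @ X.conj().T) / X.shape[1]
    # eigh returns eigenvalues in ascending order, so the noise subspace spans
    # the eigenvectors of the (n_mics - n_sources) smallest eigenvalues
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : n_mics - n_sources]
    P = np.empty(len(angles_deg))
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        # Steering vector for a plane wave arriving from angle theta
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n_mics) * np.sin(theta))
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return P / P.max()  # normalize so the spectrum peaks at 1

# One source at 20 degrees, half-wavelength spacing, light noise
rng = np.random.default_rng(0)
n_mics, snapshots = 8, 200
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(n_mics) * np.sin(np.deg2rad(20.0)))
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((n_mics, snapshots))
               + 1j * rng.standard_normal((n_mics, snapshots)))
X = np.outer(a0, s) + noise
angles = np.arange(-90.0, 90.5, 0.5)
spectrum = music_pseudospectrum(X, n_sources=1, d_over_lambda=0.5, angles_deg=angles)
print(angles[np.argmax(spectrum)])  # peak lies near the true 20-degree DOA
```

The paper's contribution is precisely about how this pseudo-spectrum degrades at low SNR and how normalized mean processing over frequency bins mitigates that; the sketch only shows the baseline quantity being processed.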
3. A Neural Network for Hyperspectral Image Denoising by Combining Spatial–Spectral Information.
- Author
-
Lian, Xiaoying, Yin, Zhonghai, Zhao, Siwei, Li, Dandan, Lv, Shuai, Pang, Boyu, and Sun, Dexin
- Subjects
- *
IMAGE denoising , *SIGNAL-to-noise ratio , *ELECTRONIC data processing - Abstract
Hyperspectral imaging often suffers from various types of noise, including sensor non-uniformity and atmospheric disturbances. Removing multiple types of complex noise in hyperspectral images (HSIs) while preserving high fidelity in spectral dimensions is a challenging task in hyperspectral data processing. Existing methods typically focus on specific types of noise, resulting in limited applicability and an inadequate ability to handle complex noise scenarios. This paper proposes a denoising method based on a network that considers both the spatial structure and spectral differences of noise in an image data cube. The proposed network takes into account the DN value of the current band, as well as the horizontal, vertical, and spectral gradients as inputs. A multi-resolution convolutional module is employed to accurately extract spatial and spectral noise features, which are then aggregated through residual connections at different levels. Finally, the residual mixed noise is approximated. Both simulated and real case studies confirm the effectiveness of the proposed denoising method. In the simulation experiment, the average PSNR value of the denoised results reached 31.47 at a signal-to-noise ratio of 8 dB, and the experimental results on the real data set Indian Pines show that the classification accuracy of the denoised hyperspectral image (HSI) is improved by 16.31% compared to the original noisy version. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
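The network input described in the abstract above (the band's DN values plus horizontal, vertical, and spectral gradients) can be assembled in a few lines. The function name, stacking order, and clamping at the last band are illustrative assumptions:

```python
import numpy as np

def band_inputs(cube, b):
    # cube: hyperspectral data cube, shape (bands, rows, cols)
    dn = cube[b].astype(float)                # DN values of the current band
    gh = np.gradient(dn, axis=1)              # horizontal spatial gradient
    gv = np.gradient(dn, axis=0)              # vertical spatial gradient
    # Spectral gradient: difference to the neighbouring band (clamped at the end)
    gs = cube[min(b + 1, cube.shape[0] - 1)].astype(float) - dn
    return np.stack([dn, gh, gv, gs])         # 4-channel input for the network

cube = np.arange(2 * 3 * 3).reshape(2, 3, 3)
print(band_inputs(cube, 0).shape)  # (4, 3, 3)
```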
4. Updating of the Archival Large-Scale Soil Map Based on the Multitemporal Spectral Characteristics of the Bare Soil Surface Landsat Scenes.
- Author
-
Rukhovich, Dmitry I., Koroleva, Polina V., Rukhovich, Alexey D., and Komissarov, Mikhail A.
- Subjects
- *
SOIL mapping , *LANDSAT satellites , *SOIL degradation , *DIGITAL soil mapping , *ARABLE land - Abstract
For most of the arable land in Russia (132–137 million ha), the dominant and accurate soil information is stored in the form of map archives on paper without coordinate reference. The last traditional soil map(s) (TSM, TSMs) were created over 30 years ago. Traditional and/or archival soil map(s) (ASM, ASMs) are outdated in terms of storage formats, dates, and methods of production. The technology of constructing a multitemporal soil line (MSL) makes it possible to update ASMs and TSMs based on the processing of big remote-sensing data (RSD). To construct an MSL, the spectral characteristics of the bare soil surface (BSS) are used. The BSS on RSD is distinguished within the framework of the conceptual apparatus of the spectral neighborhood of the soil line. The filtering of big RSD is based on deep machine learning. In the course of the work, a vector georeferenced version of the ASM and an updated soil map were created based on the coefficient "C" of the MSL. The maps were verified based on field surveys (76 soil pits). The updated map is called the map of soil interpretation of the coefficient "C" (SIC "C"). The SIC "C" map has a more detailed legend compared to the ASM (7 sections/chapters instead of 5), greater accuracy (smaller errors of the first and second kind), and potential suitability for calculating soil organic matter/carbon (SOM/SOC) reserves (soil types/areals in the SIC "C" map are statistically significantly divided according to the thickness of the organomineral horizon and the content of SOM in the plowed layer). When updating, a systematic underestimation of the numbers of contours and areas of soils with manifestations of negative/degradation soil processes (slitization and erosion) on the TSM was established. In the process of updating, all three shortcomings of the ASMs/TSMs (archaic storage, dates, and methods of creation) were eliminated. 
The SIC "C" map is digital (thematic raster), modern, and created based on big data processing methods. For the first time, the actualization of the soil map was carried out based on the MSL characteristics (coefficient "C"). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Soil Moisture Retrieval in Bare Agricultural Areas Using Sentinel-1 Images.
- Author
-
Ettalbi, Mouad, Baghdadi, Nicolas, Garambois, Pierre-André, Bazzi, Hassan, Ferreira, Emmanuel, and Zribi, Mehrez
- Subjects
- *
SOIL moisture , *SYNTHETIC aperture radar , *AGRICULTURE , *SOIL mapping , *WEATHER forecasting , *AGRICULTURAL mapping - Abstract
Soil moisture maps are essential for hydrological, agricultural and risk assessment applications. To best meet these requirements, it is essential to develop soil moisture products at high spatial resolution, which is now made possible using the free Sentinel-1 (S1) SAR (Synthetic Aperture Radar) data. Some soil moisture retrieval techniques using S1 data relied on the use of a priori weather information in order to increase the precision of soil moisture estimates, which required access to a weather-forecasting framework. This paper presents an improved and fully autonomous solution for high-resolution soil moisture mapping in bare agricultural areas. The proposed solution derives a priori weather information directly from the original Sentinel images, thus bypassing the need for a weather forecasting framework. For soil moisture estimation, the neural network technique was implemented to ensure the optimum integration of radar information. The neural networks were trained using synthetic data generated by the modified Integral Equation Model (IEM) model and validated on real data from two study sites in France and Tunisia. The main findings showed that the use of a radar signal averaged over grids of a few km², in addition to the radar signal at plot scale, instead of a priori weather information provides good soil moisture estimations. The accuracy is even slightly better compared to the accuracy obtained using a priori weather information. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Distillation Sparsity Training Algorithm for Accelerating Convolutional Neural Networks in Embedded Systems.
- Author
-
Xiao, Penghao, Xu, Teng, Xiao, Xiayang, Li, Weisong, and Wang, Haipeng
- Subjects
- *
CONVOLUTIONAL neural networks , *AUTOMATIC target recognition , *DISTILLATION , *ALGORITHMS , *NEURAL development - Abstract
The rapid development of neural networks has come at the cost of increased computational complexity. Neural networks are both computationally intensive and memory intensive; as such, the minimal energy and computing power of satellites pose a challenge for automatic target recognition (ATR). Knowledge distillation (KD) can distill knowledge from a cumbersome teacher network to a lightweight student network, transferring the essential information learned by the teacher network. Thus, the concept of KD can be used to improve the accuracy of student networks. Even when learning from a teacher network, there is still redundancy in the student network. Traditional networks fix their structure before training, so training alone cannot remove this redundancy. This paper proposes a distillation sparsity training (DST) algorithm based on KD and network pruning to address the above limitations. We first improve the accuracy of the student network through KD, and then through network pruning, allowing the student network to learn which connections are essential. DST allows the teacher network to teach the pruned student network directly. The proposed algorithm was tested on the CIFAR-100, MSTAR, and FUSAR-Ship data sets, with a 50% sparsity setting. First, a new loss function for the teacher-pruned student was proposed, and the pruned student network showed a performance close to that of the teacher network. Second, a new sparsity model (uniformity half-pruning, UHP) was designed to solve the problem that unstructured pruning does not facilitate the implementation of general-purpose hardware acceleration and storage. Compared with traditional unstructured pruning, UHP can double the speed of neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
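The uniformity-half-pruning idea above (50% sparsity in a regular pattern that hardware can exploit) can be approximated with per-group magnitude pruning. The group size and layout here are assumptions for illustration, not the paper's exact UHP scheme:

```python
import numpy as np

def half_prune(w, group=2):
    """Zero the smaller-magnitude half of every `group` consecutive weights."""
    flat = w.reshape(-1, group)
    keep = group // 2
    order = np.argsort(np.abs(flat), axis=1)                # ascending by magnitude
    mask = np.zeros_like(flat, dtype=float)
    np.put_along_axis(mask, order[:, -keep:], 1.0, axis=1)  # keep the largest half
    return (flat * mask).reshape(w.shape)

w = np.array([0.1, -0.9, 0.4, 0.2])
print(half_prune(w))  # keeps -0.9 and 0.4, zeros the rest
```

Because every group retains exactly half of its weights, the sparsity pattern is uniform, which is what makes such schemes amenable to hardware acceleration, unlike fully unstructured pruning.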
7. KaRIn Noise Reduction Using a Convolutional Neural Network for the SWOT Ocean Products.
- Author
-
Tréboutte, Anaëlle, Carli, Elisa, Ballarotta, Maxime, Carpentier, Benjamin, Faugère, Yannice, and Dibarboure, Gérald
- Subjects
- *
CONVOLUTIONAL neural networks , *OCEAN surface topography , *STANDARD deviations , *OCEAN , *NOISE control - Abstract
The SWOT (Surface Water Ocean Topography) mission will provide high-resolution and two-dimensional measurements of sea surface height (SSH). However, despite its unprecedented precision, SWOT's Ka-band Radar Interferometer (KaRIn) still exhibits a substantial amount of random noise. In turn, the random noise limits the ability of SWOT to capture the smallest scales of the ocean's topography and its derivatives. In that context, this paper explores the feasibility, strengths and limits of a noise-reduction algorithm based on a convolutional neural network. The model is based on a U-Net architecture and is trained and tested with simulated data from the North Atlantic. Our results are compared to classical smoothing methods: a median filter, a Lanczos kernel smoother and the SWOT de-noising algorithm developed by Gomez-Navarro et al. Our U-Net model yields better results for all the evaluation metrics: 2 mm root mean square error, sub-millimetric bias, variance reduction by factor of 44 (16 dB) and an accurate power spectral density down to 10–20 km wavelengths. We also tested various scenarios to infer the robustness and the stability of the U-Net. The U-Net always exhibits good performance and can be further improved with retraining if necessary. This robustness in simulation is very encouraging: our findings show that the U-Net architecture is likely one of the best candidates to reduce the noise of flight data from KaRIn. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
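As a quick consistency check on the figures quoted in the abstract above, a variance reduction by a factor of 44 does correspond to roughly 16 dB:

```python
import math

# 10·log10(variance reduction factor) converts the factor to decibels
factor = 44
print(round(10 * math.log10(factor), 1))  # 16.4 dB
```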
8. Study on Rapid Inversion of Soil Water Content from Ground-Penetrating Radar Data Based on Deep Learning.
- Author
-
Li, Zhilian, Zeng, Zhaofa, Xiong, Hongqiang, Lu, Qi, An, Baizhou, Yan, Jiahe, Li, Risheng, Xia, Longfei, Wang, Haoyu, and Liu, Kexin
- Subjects
- *
GROUND penetrating radar , *SOIL moisture , *DEEP learning , *EARTH sciences , *SOIL sampling - Abstract
Ground-penetrating radar (GPR) is an efficient and nondestructive geophysical method with great potential for detecting soil water content at the farmland scale. However, a key challenge in soil detection is obtaining soil water content rapidly and in real-time. In recent years, deep learning methods have become more widespread in the earth sciences, making it possible to use them for soil water content inversion from GPR data. In this paper, we propose GPRSW, a neural network framework based on deep learning of GPR data. GPRSW is an end-to-end network that directly inverts volumetric soil water content (VSWC) from single-channel GPR data. Synthetic experiments show that GPRSW accurately identifies different VSWC boundaries in the model in time depth. The predicted VSWC and model fit well within 40 ns, with a maximum error after 40 ns of less than 0.10 cm3 × cm−3. To validate our method, we conducted GPR measurements at the experimental field of the Academy of Agricultural Sciences in Gongzhuling City, Jilin Province, and applied GPRSW to VSWC measurements. The results show that the predicted values of GPRSW match field soil samples and are consistent with the overall trend of the TDR soil probe samples, with a maximum difference not exceeding 0.03 cm3 × cm−3. Therefore, our study shows that GPRSW has the potential to be applied to obtain soil water content from GPR data on farmland. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Efficient Dual-Branch Bottleneck Networks of Semantic Segmentation Based on CCD Camera.
- Author
-
Li, Jiehao, Dai, Yingpeng, Su, Xiaohang, and Wu, Weibin
- Subjects
- *
MOBILE robots , *COMPUTATIONAL complexity , *SPATIAL resolution , *CCD cameras , *IMAGE segmentation - Abstract
This paper investigates a novel Efficient Dual-branch Bottleneck Network (EDBNet) to perform real-time semantic segmentation tasks on mobile robot systems based on a CCD camera. To remedy the non-linear connection between the input and the output, a small-scale and shallow module called the Efficient Dual-branch Bottleneck (EDB) module is established. The EDB unit consists of two branches with different dilation rates, and each branch widens the non-linear layers. This module helps to simultaneously extract local and situational information while maintaining a minimal set of parameters. Moreover, the EDBNet, which is built on the EDB unit, is intended to enhance accuracy, inference speed, and parameter flexibility. It employs dilated convolution with a high dilation rate to increase the receptive field and three downsampling procedures to maintain feature maps with superior spatial resolution. Additionally, the EDBNet uses effective convolutions and compresses the network layer to reduce computational complexity, which is an efficient technique to capture a great deal of information while keeping a rapid computing speed. Finally, using the CamVid and Cityscapes datasets, we obtain Mean Intersection over Union (MIoU) results of 68.58 percent and 71.21 percent, respectively, with just 1.03 million parameters and faster performance on a single GTX 1070Ti card. These results also demonstrate the effectiveness of the practical mobile robot system. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
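The claim in the abstract above, that high dilation rates enlarge the receptive field cheaply, follows from a short calculation: each stride-1 convolution widens the receptive field by (kernel − 1) × dilation. The kernel sizes and dilation rates below are illustrative, not the actual EDB configuration:

```python
def receptive_field(layers):
    # layers: list of (kernel_size, dilation) tuples; stride 1 assumed throughout
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d    # each layer widens the field by (k-1)*dilation
    return rf

# Three 3x3 convolutions with dilations 1, 2, 4 see a 15-pixel-wide window,
# versus 7 pixels for the same stack without dilation -- at identical cost.
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # -> 15
```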
10. End-to-End Prediction of Lightning Events from Geostationary Satellite Images.
- Author
-
Brodehl, Sebastian, Müller, Richard, Schömer, Elmar, Spichtinger, Peter, and Wand, Michael
- Subjects
- *
REMOTE-sensing images , *ARTIFICIAL neural networks , *GEOSTATIONARY satellites , *THUNDERSTORMS , *INFRARED imaging , *CONVOLUTIONAL neural networks , *OPTICAL flow - Abstract
While thunderstorms can pose severe risks to property and life, forecasting remains challenging, even at short lead times, as these often arise in meta-stable atmospheric conditions. In this paper, we examine the question of how well we could perform short-term (up to 180 min) forecasts using exclusively multi-spectral satellite images and past lightning events as data. We employ representation learning based on deep convolutional neural networks in an "end-to-end" fashion. Here, a crucial problem is handling the imbalance of the positive and negative classes appropriately in order to be able to obtain predictive results (which is not addressed by many previous machine-learning-based approaches). The resulting network outperforms previous methods based on physically based features and optical flow methods (similar to operational prediction models) and generalizes across different years. A closer examination of the classifier performance over time and under masking of input data indicates that the learned model actually draws most information from structures in the visible spectrum, with infrared imaging sustaining some classification performance during the night. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
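One common way to handle the class imbalance highlighted in the abstract above is to up-weight the rare positive class in the loss. This weighted binary cross-entropy is a generic sketch of that idea, not necessarily the authors' exact scheme:

```python
import numpy as np

def weighted_bce(y_true, p_pred, pos_weight):
    # Up-weight the rare positive (lightning) pixels so the loss is not
    # dominated by the overwhelming majority of negative pixels.
    eps = 1e-7
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(pos_weight * y_true * np.log(p)
                    + (1 - y_true) * np.log(1 - p))

y = np.array([1.0, 0.0])
p = np.array([0.5, 0.5])
print(weighted_bce(y, p, pos_weight=9.0))  # the positive error counts 9x
```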
11. An Overview on Visual SLAM: From Tradition to Semantic.
- Author
-
Chen, Weifeng, Shang, Guangtao, Ji, Aihong, Zhou, Chengjun, Wang, Xiyang, Xu, Chonghui, Li, Zhenxiong, and Hu, Kai
- Subjects
- *
DEEP learning , *COMPUTER vision - Abstract
Visual SLAM (VSLAM) has been developing rapidly due to its advantages of low-cost sensors, the easy fusion of other sensors, and richer environmental information. Traditional vision-based SLAM research has made many achievements, but it may fail to achieve the desired results in challenging environments. Deep learning has promoted the development of computer vision, and the combination of deep learning and SLAM has attracted more and more attention. Semantic information, as high-level environmental information, can enable robots to better understand the surrounding environment. This paper introduces the development of VSLAM technology from two aspects: traditional VSLAM and semantic VSLAM combined with deep learning. For traditional VSLAM, we summarize the advantages and disadvantages of indirect and direct methods in detail and give some classical VSLAM open-source algorithms. In addition, we focus on the development of semantic VSLAM based on deep learning. Starting with the typical neural networks CNN and RNN, we summarize the improvement of neural networks for the VSLAM system in detail. Later, we focus on the help of target detection and semantic segmentation for VSLAM semantic information introduction. We believe that the development of the future intelligent era cannot be without the help of semantic technology. Introducing deep learning into the VSLAM system to provide semantic information can help robots better perceive the surrounding environment and provide people with higher-level help. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. Recognition of the Bare Soil Using Deep Machine Learning Methods to Create Maps of Arable Soil Degradation Based on the Analysis of Multi-Temporal Remote Sensing Data.
- Author
-
Rukhovich, Dmitry I., Koroleva, Polina V., Rukhovich, Danila D., and Rukhovich, Alexey D.
- Subjects
- *
SOIL degradation , *MACHINE learning , *REMOTE sensing , *DEEP learning , *FALSE positive error - Abstract
The detection of degraded soil distribution areas is an urgent task. It is difficult and very time consuming to solve this problem using ground methods. The modeling of degradation processes based on digital elevation models makes it possible to construct maps of potential degradation, which may differ from the actual spatial distribution of degradation. The use of remote sensing data (RSD) for soil degradation detection is very widespread. Most often, vegetation indices (indicative botany) have been used for this purpose. In this paper, we propose a method for constructing soil maps based on a multi-temporal analysis of the bare soil surface (BSS). It is an alternative method to the use of vegetation indices. The detection of the bare soil surface was carried out using the spectral neighborhood of the soil line (SNSL) technology. For the automatic recognition of BSS on each RSD image, computer vision based on deep machine learning (neural networks) was used. A dataset of 244 BSS distribution masks on 244 Landsat 4, 5, 7, and 8 scenes over 37 years was developed. Half of the dataset was used as a training sample (Landsat path/row 173/028). The other half was used as a test sample (Landsat path/row 174/027). Binary masks were sufficient for recognition. For each RSD pixel, value "1" was set when determining the BSS. In the absence of BSS, value "0" was set. The accuracy of the machine prediction of the presence of BSS was 75%. The detection of degradation was based on the average long-term spectral characteristics of the RED and NIR bands. The coefficient Cmean, which is the distance of the point with the average long-term values of RED and NIR from the origin of the spectral plane RED/NIR, was calculated as an integral characteristic of the mean long-term values. Higher long-term average values of spectral brightness served as indicators of the spread of soil degradation. 
To test the method of constructing soil degradation maps based on deep machine learning, an acceptance sample of 133 Landsat scenes of path/row 173/026 was used. On the territory of the acceptance sample, ground verifications of the maps of the coefficient Cmean were carried out. Ground verification showed that the values of this coefficient make it possible to estimate the content of organic matter in the plow horizon (R2 = 0.841) and the thickness of the humus horizon (R2 = 0.8599). In total, 80 soil pits were analyzed on an area of 649 ha on eight agricultural fields. Type I error (false positive) of degradation detection was 17.5%, and type II error (false negative) was 2.5%. During the determination of the presence of degradation by ground methods, 90% of the ground data coincided with the detection of degradation from RSD. Thus, the quality of machine learning for BSS recognition is sufficient for the construction of soil degradation maps. The SNSL technology allows us to create maps of soil degradation based on the long-term average spectral characteristics of the BSS. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
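The integral characteristic described in the abstract above, the coefficient Cmean as the distance of the long-term mean (RED, NIR) point from the origin of the spectral plane, reduces to a per-pixel Euclidean norm. The function name, band handling, and use of NaN to mark non-BSS dates below are assumptions:

```python
import numpy as np

def c_mean(red_stack, nir_stack):
    # Long-term mean reflectance per pixel over the multi-temporal stack of
    # bare-soil-surface scenes (axis 0 = acquisition date; NaN = no BSS that date)
    red_bar = np.nanmean(red_stack, axis=0)
    nir_bar = np.nanmean(nir_stack, axis=0)
    # Cmean: distance of (RED_bar, NIR_bar) from the origin of the RED/NIR plane
    return np.hypot(red_bar, nir_bar)

red = np.array([[3.0, 6.0], [3.0, np.nan]])   # two dates, two pixels
nir = np.array([[4.0, 8.0], [4.0, np.nan]])
print(c_mean(red, nir))  # pixel values 5.0 and 10.0
```

Higher Cmean values (brighter long-term bare-soil spectra) then serve as the degradation indicator mapped in the paper.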
13. Synergic Use of Sentinel-1 and Sentinel-2 Images for Operational Soil Moisture Mapping at High Spatial Resolution over Agricultural Areas.
- Author
-
El Hajj, Mohammad, Baghdadi, Nicolas, Zribi, Mehrez, and Bazzi, Hassan
- Subjects
- *
SOIL moisture , *REMOTE-sensing images , *SOIL mapping , *HIGH resolution imaging , *GROUND vegetation cover - Abstract
Soil moisture mapping at a high spatial resolution is very important for several applications in hydrology, agriculture and risk assessment. With the arrival of the free Sentinel data at high spatial and temporal resolutions, the development of soil moisture products that can better meet the needs of users is now possible. In this context, the main objective of the present paper is to develop an operational approach for soil moisture mapping in agricultural areas at a high spatial resolution over bare soils, as well as soils with vegetation cover. The developed approach is based on the synergic use of radar and optical data. A neural network technique was used to develop an operational method for soil moisture estimates. Three inversion SAR (Synthetic Aperture Radar) configurations were tested: (1) VV polarization; (2) VH polarization; and (3) both VV and VH polarization, all in addition to the NDVI information extracted from optical images. Neural networks were developed and validated using synthetic and real databases. The results showed that the use of a priori information on the soil moisture condition increases the precision of the soil moisture estimates. The results showed that VV alone provides better accuracy on the soil moisture estimates than VH alone. In addition, the use of both VV and VH provides similar results, compared to VV alone. In conclusion, the soil moisture could be estimated in agricultural areas with an accuracy of approximately 5 vol % (volumetric unit expressed in percent). Better results were obtained for soil with a moderate surface roughness (for root mean square surface height between 1 and 3 cm). The developed approach could be applied for agricultural plots with an NDVI lower than 0.75. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
14. Classifying Complex Mountainous Forests with L-Band SAR and Landsat Data Integration: A Comparison among Different Machine Learning Methods in the Hyrcanian Forest.
- Author
-
Attarchi, Sara and Gloaguen, Richard
- Subjects
- *
REMOTE sensing , *FORESTS & forestry , *LANDSAT satellites , *DATA integration , *MACHINE learning , *SUPPORT vector machines , *ARTIFICIAL neural networks , *RANDOM forest algorithms - Abstract
Forest environment classification in mountain regions based on single-sensor remote sensing approaches is hindered by forest complexity and topographic effects. Temperate broadleaf forests in western Asia such as the Hyrcanian forest in northern Iran have already suffered from intense anthropogenic activities. In those regions, forests mainly extend in rough terrain and comprise different stand structures, which are difficult to discriminate. This paper explores the joint analysis of Landsat7/ETM+, L-band SAR and their derived parameters and the effect of terrain corrections to overcome the challenges of discriminating forest stand age classes in mountain regions. We also verified the performances of three machine learning methods which have recently shown promising results using multisource data: support vector machines (SVM), neural networks (NN), random forest (RF), and one traditional classifier (i.e., maximum likelihood classification (MLC)) as a benchmark. The non-topographically corrected ETM+ data failed to differentiate among different forest stand age classes (average classification accuracy (OA) = 65%). This confirms the need to reduce relief effects prior to data classification in mountain regions. SAR backscattering alone cannot properly differentiate among different forest stand age classes (OA = 62%). However, textures and PolSAR features are very efficient for the separation of forest classes (OA = 82%). The highest classification accuracy was achieved by the joint usage of SAR and ETM+ (OA = 86%). However, this shows a slight improvement compared to the ETM+ classification (OA = 84%). The machine learning classifiers proved to be more robust and accurate compared to MLC. SVM and RF statistically produced better classification results than NN in the exploitation of the considered multi-source data. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
16. S2Looking: A Satellite Side-Looking Dataset for Building Change Detection.
- Author
-
Shen, Li, Lu, Yao, Chen, Hao, Wei, Hao, Xie, Donghai, Yue, Jiabao, Chen, Rui, Lv, Shouye, and Jiang, Bitao
- Subjects
- *
REMOTE-sensing images , *LANDSAT satellites , *DEEP learning , *OPTICAL images , *RURAL geography , *REMOTE sensing - Abstract
Building-change detection underpins many important applications, especially in the military and crisis-management domains. Recent methods used for change detection have shifted towards deep learning, which depends on the quality of its training data. The assembly of large-scale annotated satellite imagery datasets is therefore essential for global building-change surveillance. Existing datasets almost exclusively offer near-nadir viewing angles. This limits the range of changes that can be detected. By offering larger observation ranges, the scroll imaging mode of optical satellites presents an opportunity to overcome this restriction. This paper therefore introduces S2Looking, a building-change-detection dataset that contains large-scale side-looking satellite images captured at various off-nadir angles. The dataset consists of 5000 bitemporal image pairs of rural areas and more than 65,920 annotated instances of changes throughout the world. The dataset can be used to train deep-learning-based change-detection algorithms. It expands upon existing datasets by providing (1) larger viewing angles; (2) large illumination variances; and (3) the added complexity of rural images. To facilitate the use of the dataset, a benchmark task has been established, and preliminary tests suggest that deep-learning algorithms find the dataset significantly more challenging than the closest-competing near-nadir dataset, LEVIR-CD+. S2Looking may therefore promote important advances in existing building-change-detection algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
17. An Overview of Neural Network Methods for Predicting Uncertainty in Atmospheric Remote Sensing.
- Author
-
Doicu, Adrian, Doicu, Alexandru, Efremenko, Dmitry S., Loyola, Diego, and Trautmann, Thomas
- Subjects
- *
REMOTE sensing , *INTERVAL analysis , *RADIATIVE transfer , *DEEP learning , *PROBLEM solving - Abstract
In this paper, we present neural network methods for predicting uncertainty in atmospheric remote sensing. These include methods for solving the direct and the inverse problem in a Bayesian framework. In the first case, a method based on a neural network for simulating the radiative transfer model and a Bayesian approach for solving the inverse problem is proposed. In the second case, (i) a neural network, in which the output is the convolution of the output for a noise-free input with the input noise distribution; and (ii) a Bayesian deep learning framework that predicts input aleatoric and model uncertainties, are designed. In addition, a neural network that uses assumed density filtering and interval arithmetic to compute uncertainty is employed for testing purposes. The accuracy and the precision of the methods are analyzed by considering the retrieval of cloud parameters from radiances measured by the Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. Application of Deep Learning Architectures for Satellite Image Time Series Prediction: A Review.
- Author
-
Moskolaï, Waytehad Rose, Abdou, Wahabou, Dipanda, Albert, and Kolyang
- Subjects
- *
DEEP learning , *REMOTE-sensing images , *TIME series analysis , *CONVOLUTIONAL neural networks , *MACHINE learning , *ARTIFICIAL intelligence - Abstract
Satellite image time series (SITS) is a sequence of satellite images that record a given area at several consecutive times. The aim of such sequences is to use not only spatial information but also the temporal dimension of the data, which is used for multiple real-world applications, such as classification, segmentation, anomaly detection, and prediction. Several traditional machine learning algorithms have been developed and successfully applied to time series for predictions. However, these methods have limitations in some situations, thus deep learning (DL) techniques have been introduced to achieve the best performance. Reviews of machine learning and DL methods for time series prediction problems have been conducted in previous studies. However, to the best of our knowledge, none of these surveys have addressed the specific case of works using DL techniques and satellite images as datasets for predictions. Therefore, this paper concentrates on the DL applications for SITS prediction, giving an overview of the main elements used to design and evaluate the predictive models, namely the architectures, data, optimization functions, and evaluation metrics. The reviewed DL-based models are divided into three categories, namely recurrent neural network-based models, hybrid models, and feed-forward-based models (convolutional neural networks and multi-layer perceptron). The main characteristics of satellite images and the major existing applications in the field of SITS prediction are also presented in this article. These applications include weather forecasting, precipitation nowcasting, spatio-temporal analysis, and missing data reconstruction. Finally, current limitations and proposed workable solutions related to the use of DL for SITS prediction are also highlighted. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. Integrating Multiple Datasets and Machine Learning Algorithms for Satellite-Based Bathymetry in Seaports.
- Author
-
Wu, Zhongqiang, Mao, Zhihua, and Shen, Wen
- Subjects
- *
MACHINE learning , *HARBORS , *WATER depth , *REMOTE-sensing images , *BATHYMETRY - Abstract
Water depth estimation in seaports is essential for effective port management. This paper presents an empirical approach for water depth determination from satellite imagery through the integration of multiple datasets and machine learning algorithms. The implementation details of the proposed approach are provided and compared against different existing machine learning algorithms with a single training set. For a single training set and a single machine learning method, our analysis shows that the proposed depth estimation method provides a lower root-mean-square error (RMSE) and a higher coefficient of determination (R2) under turbid water conditions, with overall RMSE and R2 improvements of 1 cm and 0.7, respectively. The developed method may be employed in monitoring dredging activities, especially in areas with polluted water, mud and/or a high sediment content. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
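The RMSE and coefficient-of-determination figures quoted in the abstract above are standard regression metrics; a minimal NumPy sketch of how they are computed (illustrative only, not code from the paper):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and estimated depths."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

An R2 close to 1 indicates the estimated depths explain almost all of the variance in the reference soundings.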
20. Adaptive Network Detector for Radar Target in Changing Scenes.
- Author
-
Jing, He, Cheng, Yongqiang, Wu, Hao, and Wang, Hongqiang
- Subjects
- *
RADAR targets , *DEEP learning , *DETECTORS , *SMART structures , *FALSE alarms - Abstract
Data-driven deep learning has been widely applied to radar target detection. However, detection performance degrades severely when the detection scene changes, since a network trained on data from one scene is not suited to another scene with a different data distribution. To address this problem, an adaptive network detector combined with scene classification is proposed in this paper. Aiming to maximize the posterior probability of the feature vectors, the scene classification network is arranged to control the output ratio of a group of detection sub-networks. Because the classification error rate of traditional machine learning is uncertain, a classifier with a controllable false alarm rate is constructed. In addition, a new network training strategy, which freezes the parameters of the scene classification network and selectively fine-tunes the parameters of the detection sub-networks, is proposed for the adaptive network structure. Comprehensive experiments demonstrate that the proposed method maintains a high detection probability when the detection scene changes. Compared with several classical detectors, the adaptive network detector shows better performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
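The abstract above describes a scene classifier that "controls the output ratio" of a group of detection sub-networks. One plausible reading is a soft gating scheme, in which the classifier's posterior probabilities weight the sub-network outputs; the sketch below illustrates that reading only, and every name in it is hypothetical rather than taken from the paper:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: logits -> posterior probabilities."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def adaptive_detect(x, scene_logits, subnetworks):
    """Weight each per-scene detection sub-network's output by the scene
    classifier's posterior probability (a soft-gating sketch)."""
    weights = softmax(np.asarray(scene_logits, float))
    outputs = np.array([net(x) for net in subnetworks])
    return float(np.dot(weights, outputs))
```

With confident scene logits the gate effectively selects a single sub-network; with uncertain logits it blends them.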
21. Editorial to Special Issue "Remote Sensing Data Compression".
- Author
-
Vozel, Benoit, Lukin, Vladimir, and Serra-Sagristà, Joan
- Subjects
- *
DATA compression , *REMOTE sensing , *IMAGE processing - Abstract
A huge amount of remote sensing data is acquired each day, which is transferred to image processing centers and/or to customers. Due to different limitations, compression has to be applied on-board and/or on-the-ground. This Special Issue collects 15 papers dealing with remote sensing data compression, introducing solutions for both lossless and lossy compression, analyzing the impact of compression on different processes, investigating the suitability of neural networks for compression, and researching on low complexity hardware and software approaches to deliver competitive coding performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
22. Reduced-Complexity End-to-End Variational Autoencoder for on Board Satellite Image Compression.
- Author
-
Alves de Oliveira, Vinicius, Chabert, Marie, Oberlin, Thomas, Poulliat, Charly, Bruno, Mickael, Latry, Christophe, Carlavan, Mikael, Henrot, Simon, Falzon, Frederic, Camarero, Roberto, and Lukin, Vladimir
- Subjects
- *
IMAGE compression , *REMOTE-sensing images , *VIDEO coding , *CONVOLUTIONAL neural networks , *COMPUTATIONAL complexity , *IMAGE representation - Abstract
Recently, convolutional neural networks have been successfully applied to lossy image compression. End-to-end optimized autoencoders, possibly variational, are able to dramatically outperform traditional transform coding schemes in terms of rate-distortion trade-off; however, this comes at the cost of a higher computational complexity. An intensive training step on huge databases allows autoencoders to learn jointly the image representation and its probability distribution, possibly using a non-parametric density model or a hyperprior auxiliary autoencoder to eliminate the need for prior knowledge. However, in the context of on-board satellite compression, time and memory complexities are subject to strong constraints. The aim of this paper is to design a complexity-reduced variational autoencoder that meets these constraints while maintaining performance. Apart from a network dimension reduction that systematically targets each parameter of the analysis and synthesis transforms, we propose a simplified entropy model that preserves adaptability to the input image. Indeed, a statistical analysis performed on satellite images shows that the Laplacian distribution fits most features of their representation. A complex non-parametric distribution fitting or a cumbersome hyperprior auxiliary autoencoder can thus be replaced by a simple parametric estimation. The proposed complexity-reduced autoencoder outperforms the Consultative Committee for Space Data Systems standard (CCSDS 122.0-B) while maintaining competitive performance, in terms of rate-distortion trade-off, in comparison with state-of-the-art learned image compression schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
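The "simple parametric estimation" mentioned in the abstract above can be illustrated with the closed-form maximum-likelihood fit of a Laplacian (location = sample median, scale = mean absolute deviation), whose CDF is what an entropy coder would consume. This is a generic sketch under that assumption, not the paper's implementation:

```python
import numpy as np

def fit_laplacian(x):
    """ML Laplacian fit: location mu = median(x), scale b = mean|x - mu|."""
    x = np.asarray(x, float)
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    return mu, b

def laplacian_cdf(x, mu, b):
    """Laplacian CDF, the quantity a range coder needs for entropy coding."""
    z = (np.asarray(x, float) - mu) / b
    return np.where(z < 0, 0.5 * np.exp(z), 1.0 - 0.5 * np.exp(-z))
```

Replacing a learned non-parametric density with this two-parameter fit is what removes the hyperprior network's cost.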
23. DR-Net: An Improved Network for Building Extraction from High Resolution Remote Sensing Image.
- Author
-
Chen, Meng, Wu, Jianjun, Liu, Leizhen, Zhao, Wenhui, Tian, Feng, Shen, Qiu, Zhao, Bingyu, and Du, Ruohua
- Subjects
- *
REMOTE sensing , *CONVOLUTIONAL neural networks , *ARTIFICIAL neural networks , *BUILDING performance - Abstract
At present, convolutional neural networks (CNN) are widely used for building extraction from remote sensing imagery (RSI), but some bottlenecks remain. On the one hand, previous networks with complex structures contain many parameters, which occupy a great deal of memory and consume much time during training. On the other hand, the low-level features extracted by shallow layers and the abstract features extracted by deep layers of an artificial neural network cannot be fully fused, which leads to inaccurate building extraction from RSI. To alleviate these disadvantages, a dense residual neural network (DR-Net) is proposed in this paper. DR-Net uses a deeplabv3+ encoder/decoder backbone, in combination with a densely connected convolutional neural network (DCNN) and a residual network (ResNet) structure. Compared with deeplabv3+ (about 41 million parameters) and BRRNet (about 17 million parameters), DR-Net contains only about 9 million parameters, a substantial reduction. On both the WHU Building Dataset and the Massachusetts Building Dataset, DR-Net shows better building-extraction performance than the other two state-of-the-art methods: on the WHU Building Dataset, Intersection over Union (IoU) increased by 2.4% and the F1 score by 1.4%; on the Massachusetts Building Dataset, IoU increased by 3.8% and the F1 score by 2.9%. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
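The IoU and F1 scores reported in the abstract above are computed from pixel-wise true/false positives and negatives of the predicted building mask; a minimal illustrative sketch (not the authors' code):

```python
import numpy as np

def iou_f1(pred, truth):
    """Intersection over Union and F1 score for binary building masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()   # correctly predicted building pixels
    fp = np.logical_and(pred, ~truth).sum()  # spurious building pixels
    fn = np.logical_and(~pred, truth).sum()  # missed building pixels
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1
```

Note that F1 (the Dice score) is always at least as large as IoU for the same masks, which is why the two improvements in the abstract differ.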
24. The Use of Deep Machine Learning for the Automated Selection of Remote Sensing Data for the Determination of Areas of Arable Land Degradation Processes Distribution.
- Author
-
Rukhovich, Dmitry I., Koroleva, Polina V., Rukhovich, Danila D., and Kalinina, Natalia V.
- Subjects
- *
DEEP learning , *MACHINE learning , *REMOTE sensing , *ARABLE land , *SOIL degradation , *FALSE positive error , *LAND degradation , *PARTICLE size determination - Abstract
Soil degradation processes are widespread on agricultural land. Ground-based methods for detecting degradation require a great deal of labor and time, and remote methods based on the analysis of vegetation indices can significantly reduce the volume of ground surveys. Currently, machine learning methods are increasingly being used to analyze remote sensing data. In this paper, deep machine learning methods and vegetation-index calculations are applied to automate the detection of soil degradation areas on arable land. In the course of the work, a method was developed for determining the location of degraded areas of soil cover on arable fields, based on multi-temporal remote sensing data. The selection of suitable remote sensing scenes relies on deep machine learning, based on an analysis of 1028 scenes from Landsats 4, 5, 7 and 8 over 530 agricultural fields, covering Landsat data from 1984 to 2019. The dataset was created manually for each "Landsat scene"/"agricultural field number" pair (for each agricultural field, the suitability of each Landsat scene was assessed). Areas of soil degradation were calculated from the frequency of occurrence of low NDVI values over 35 years. Low NDVI values were identified separately for each suitable fragment of the satellite image within the boundaries of each agricultural field; NDVI values in the lowest third of a field's distribution were considered low. During testing, the method yielded 12.5% type I errors (false positives) and 3.8% type II errors (false negatives). Independent verification was carried out on six agricultural fields covering 713.3 hectares, with humus content and humus-horizon thickness determined at 42 ground-based points. In arable land degradation areas identified by the proposed method, the probability of detecting soil degradation by field methods was 87.5%; outside the predicted regions, it was 3.8%. The results indicate that deep machine learning is feasible for remote sensing data selection based on a binary dataset. This eliminates the need for intermediate filtering systems in the selection of satellite imagery (detection of clouds, cloud shadows, open soil surface, etc.) and permits the direct selection of Landsat scenes suitable for calculation, automating the construction of soil degradation maps. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
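The low-NDVI frequency rule described in the abstract above can be sketched in a few lines; here "lowest third of the field's distribution" is interpreted as a per-scene 33rd-percentile cut-off, which is my reading of the abstract rather than the authors' exact procedure, and all names are illustrative:

```python
import numpy as np

def low_ndvi_frequency(ndvi_stack):
    """Per-pixel frequency of 'low NDVI' flags across scenes.

    ndvi_stack: array of shape (n_scenes, n_pixels) for one field,
    with NaN marking pixels unusable in a given scene.
    """
    freq = np.zeros(ndvi_stack.shape[1])
    counts = np.zeros(ndvi_stack.shape[1])
    for scene in ndvi_stack:
        valid = ~np.isnan(scene)
        if valid.sum() == 0:
            continue
        threshold = np.nanpercentile(scene, 100.0 / 3.0)  # lowest-third cut-off
        freq += valid & (scene <= threshold)
        counts += valid
    # Frequency = low-flag count / number of usable observations per pixel.
    return np.divide(freq, counts, out=np.zeros_like(freq), where=counts > 0)
```

Pixels whose frequency stays high over the 35-year stack would then be mapped as candidate degradation areas.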
25. Weakly Supervised Change Detection Based on Edge Mapping and SDAE Network in High-Resolution Remote Sensing Images.
- Author
-
Lu, Ning, Chen, Can, Shi, Wenbo, Zhang, Junwei, and Ma, Jianfeng
- Subjects
- *
REMOTE-sensing images , *SURFACE of the earth , *OPTICAL remote sensing - Abstract
Change detection for high-resolution remote sensing images is increasingly widespread in monitoring the Earth's surface. However, on the one hand, ground truth facilitates the distinction between changed and unchanged areas, but it is hard to acquire. On the other hand, due to the complexity of remote sensing images, it is difficult to extract features of difference, let alone construct a classification model that performs change detection based on the features of difference in each pixel pair. Aiming at these challenges, this paper proposes a weakly supervised change detection method based on edge mapping and Stacked Denoising Auto-Encoders (SDAE), called EM-SDAE. We analyze the difference in edge maps of bi-temporal remote sensing images to acquire part of the ground truth at relatively low cost. Moreover, we design a neural network based on SDAE with a deep structure, which extracts the features of difference so as to efficiently classify changed and unchanged regions after being trained with the ground truth. In our experiments, three real sets of high-resolution remote sensing images are employed to validate the high efficiency of our proposed method. The results show that accuracy reaches up to 91.18% with our method. In particular, compared with state-of-the-art work (e.g., IR-MAD, PCA-k-means, CaffeNet, USFA, and DSFA), it improves the Kappa coefficient by 27.19% on average. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. Convolutional Neural Network with Spatial-Variant Convolution Kernel.
- Author
-
Dai, Yongpeng, Jin, Tian, Song, Yongkun, Sun, Shilong, and Wu, Chen
- Subjects
- *
CONVOLUTIONAL neural networks , *IMAGE processing , *IMAGE recognition (Computer vision) , *IMAGE intensifiers , *MIMO radar - Abstract
Radar images suffer from the impact of sidelobes. Several sidelobe-suppression methods, including a convolutional neural network (CNN)-based one, have been proposed. However, the point spread function (PSF) in radar images is sometimes spatially variant, which degrades the performance of a conventional CNN. The convolutional kernels of a CNN detect motifs with distinctive features and are invariant to the local position of those motifs; this makes CNNs widely used in image processing fields such as image recognition, handwriting recognition, image super-resolution, and semantic segmentation, and they also perform well in radar image enhancement. However, this local position invariance can be harmful for radar image enhancement when the features of motifs (i.e., the PSF in the radar imaging field) vary with position. In this paper, we propose a spatial-variant convolutional neural network (SV-CNN) with spatial-variant convolution kernels (SV-CK), aimed at this problem; it should also perform well in other settings with spatially variant features. Its function is illustrated through the application of enhancing radar images. After being trained on radar images with position-codings as samples, the SV-CNN can enhance radar images, and because it reads the local position information contained in the position-coding, it performs better than a conventional CNN. The advantage of the proposed SV-CNN is demonstrated using both simulated and real radar images. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
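The abstract above does not specify the form of its position-codings; one common way to give convolution kernels access to location, shown below purely as an assumed illustration (CoordConv-style), is to append normalized x/y coordinate channels to the input image:

```python
import numpy as np

def add_position_coding(image):
    """Append normalized x/y coordinate channels to an (H, W, C) image,
    so convolutional kernels can condition on local position.

    Returns an array of shape (H, W, C + 2).
    """
    h, w = image.shape[:2]
    ys, xs = np.meshgrid(np.linspace(0.0, 1.0, h),
                         np.linspace(0.0, 1.0, w), indexing="ij")
    return np.concatenate([image, xs[..., None], ys[..., None]], axis=-1)
```

A network fed such channels can learn kernels whose effective response varies with position, which is the property a spatially variant PSF requires.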
27. Vehicle and Vessel Detection on Satellite Imagery: A Comparative Study on Single-Shot Detectors.
- Author
-
Ophoff, Tanguy, Puttemans, Steven, Kalogirou, Vasileios, Robin, Jean-Philippe, and Goedemé, Toon
- Subjects
- *
REMOTE-sensing images , *DETECTORS , *IMAGE processing , *RADARSAT satellites , *COMPARATIVE studies , *RAILROAD trains - Abstract
In this paper, we investigate the feasibility of automatic small object detection, such as vehicles and vessels, in satellite imagery with a spatial resolution between 0.3 and 0.5 m. The main challenges of this task are the small objects, as well as the spread in object sizes, with objects ranging from 5 to a few hundred pixels in length. We first annotated 1500 km2, making sure to have equal amounts of land and water data. On top of this dataset we trained and evaluated four different single-shot object detection networks: YOLOV2, YOLOV3, D-YOLO and YOLT, adjusting the many hyperparameters to achieve maximal accuracy. We performed various experiments to better understand the performance and differences between the models. The best performing model, D-YOLO, reached an average precision of 60% for vehicles and 66% for vessels and can process an image of around 1 Gpx in 14 s. We conclude that these models, if properly tuned, can thus indeed be used to help speed up the workflows of satellite data analysts and to create even bigger datasets, making it possible to train even better models in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
28. Automatic Mapping of Center Pivot Irrigation Systems from Satellite Images Using Deep Learning.
- Author
-
Saraiva, Marciano, Protas, Églen, Salgado, Moisés, and Souza, Carlos
- Subjects
- *
DEEP learning , *ARTIFICIAL neural networks , *REMOTE-sensing images , *IRRIGATION , *GOLF course maintenance , *WATER management , *IMAGE segmentation - Abstract
The availability of freshwater is becoming a global concern. Because agricultural consumption has been increasing steadily, the mapping of irrigated areas is key for supporting the monitoring of land use and better management of available water resources. In this paper, we propose a method to automatically detect and map center pivot irrigation systems using U-Net, an image segmentation convolutional neural network architecture, applied to a constellation of PlanetScope images from the Cerrado biome of Brazil. Our objective is to provide a fast and accurate alternative to map center pivot irrigation systems with very high spatial and temporal resolution imagery. We implemented a modified U-Net architecture using the TensorFlow library and trained it on the Google cloud platform with a dataset built from more than 42,000 very high spatial resolution PlanetScope images acquired between August 2017 and November 2018. The U-Net implementation achieved a precision of 99% and a recall of 88% to detect and map center pivot irrigation systems in our study area. This method, proposed to detect and map center pivot irrigation systems, has the potential to be scaled to larger areas and improve the monitoring of freshwater use by agricultural activities. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. Application of DINCAE to Reconstruct the Gaps in Chlorophyll-a Satellite Observations in the South China Sea and West Philippine Sea.
- Author
-
Han, Zhaohui, He, Yijun, Liu, Guoqiang, and Perrie, William
- Subjects
- *
ARTIFICIAL neural networks , *STANDARD deviations , *OCEAN temperature , *ORTHOGONAL functions , *CONTINENTAL shelf , *ARTIFICIAL satellites , *SOLAR radiation , *ARTIFICIAL satellite tracking - Abstract
The Data Interpolating Empirical Orthogonal Functions (DINEOF) method has demonstrated usability and accuracy for filling spatial gaps in remote sensing datasets. In this study, we conducted the reconstruction of the chlorophyll-a concentration (Chl-a) data using a convolutional neural networks model called Data-Interpolating Convolutional Auto-Encoder (DINCAE), and we compared its performance with that of DINEOF. Furthermore, the cloud-free sea surface temperature (SST) was used as a phytoplankton dynamics predictor for the Chl-a reconstruction. Finally, four reconstruction schemes were implemented: DINCAE (Chl-a only), DINCAE (Chl-a and SST), DINEOF (Chl-a only), and DINEOF (Chl-a and SST), denoted rec1, rec2, rec3, and rec4 respectively. To quantitatively evaluate the accuracy of these reconstruction schemes, both the cross-validation and in situ data were used. The study domain was chosen to be the Northern South China Sea (SCS) and West Philippine Sea (WPS), bounded by 115–125°E and 16–24°N to test the model performance for the reconstruction of Chl-a under different Chl-a controlling mechanisms. The in situ validation showed that rec1 performs best among the four reconstruction schemes, and that adding SST into the Chl-a reconstruction cannot improve the reconstruction results. However, for cross validation, adding SST can slightly improve spatial distributions of the root mean square error (RMSE) between the reconstructed data and the original data, especially over the SCS continental shelf. Furthermore, the potential of DINCAE prediction is confirmed in this paper; thus, the trained DINCAE model can be re-applied to reconstruct other missing data, and more importantly, it can also be re-trained using the reconstructed data, thereby further improving reconstruction results. Another consideration is efficiency; with similar reconstruction conditions, DINCAE is 5–10 times faster than DINEOF. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. Beyond GIS Layering: Challenging the (Re)use and Fusion of Archaeological Prospection Data Based on Bayesian Neural Networks (BNN).
- Author
-
Agapiou, Athos and Sarris, Apostolos
- Subjects
- *
ARCHAEOLOGICAL excavations , *REMOTE sensing , *GROUND penetrating radar , *ARTIFICIAL neural networks , *DATA acquisition systems - Abstract
Multisource remote sensing data acquisition has increased in recent years due to technological improvements and the decreased acquisition cost of remotely sensed data and products. This study attempts to fuse different types of prospection data acquired from dissimilar remote sensors and explores new ways of interpreting remote sensing data obtained from archaeological sites. The combination and fusion of complementary sensory data not only increases detection accuracy but also improves overall recall and precision. Moving beyond the discussion and concerns related to the fusion and integration of multisource prospection data, this study argues for their potential (re)use based on Bayesian Neural Network (BNN) fusion models. The archaeological site of Vésztő-Mágor Tell in eastern Hungary was selected as a case study, since ground penetrating radar (GPR) and ground spectral signatures had been collected there in the past. GPR 20 cm depth-slice results were correlated with spectroradiometric datasets based on neural network models. The results showed that the BNN models provide a global correlation coefficient of up to 73% (between the GPR and the spectroradiometric data) for all depth slices. This could eventually lead to the re-use of archived geo-prospection datasets alongside optical earth observation datasets. A discussion of the potential limitations and challenges of this approach is also included in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
31. The Passive Microwave Neural Network Precipitation Retrieval (PNPR) Algorithm for the CONICAL Scanning Global Microwave Imager (GMI) Radiometer.
- Author
-
Sanò, Paolo, Panegrossi, Giulia, Casella, Daniele, Marra, Anna C., D'Adderio, Leo P., Rysman, Jean F., and Dietrich, Stefano
- Subjects
- *
ALGORITHMS , *MICROWAVE radiometers , *RAIN gauges , *MICROWAVES , *RADIOMETERS , *RAINFALL - Abstract
This paper describes a new rainfall rate retrieval algorithm, developed within the EUMETSAT H SAF program, based on the Passive microwave Neural network Precipitation Retrieval approach (PNPR v3) and designed to work with the conically scanning Global Precipitation Measurement (GPM) Microwave Imager (GMI). A new rain/no-rain classification scheme, also based on the NN approach, which provides different rainfall masks for different minimum thresholds and degrees of reliability, is also described. The algorithm is trained on an extremely large observational database, built from GPM global observations between 2014 and 2016, with the NASA 2B-CMB (V04) rainfall rate product used as the reference. To assess the performance of PNPR v3 over the globe, an independent part of the observational database is used in a verification study. The good results found over all surface types (CC > 0.90, ME < −0.22 mm h−1, RMSE < 2.75 mm h−1 and FSE% < 100% for rainfall rates lower than 1 mm h−1 and around 30–50% for moderate to high rainfall rates) demonstrate the good outcome of the input selection procedure, as well as of the training and design phase of the neural network. For further verification, two case studies over Italy are analysed, and PNPR v3 retrievals are found to be consistent with simultaneous ground radar observations and with the GMI GPROF V05 estimates. PNPR v3 is a global algorithm, able to optimally exploit the GMI multi-channel response to different surface types and precipitation structures, that provides rainfall retrieval in a computationally very efficient way, making the product suitable for near-real-time operational applications. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF