10 results
Search Results
2. Study on Rapid Inversion of Soil Water Content from Ground-Penetrating Radar Data Based on Deep Learning.
- Author
- Li, Zhilian, Zeng, Zhaofa, Xiong, Hongqiang, Lu, Qi, An, Baizhou, Yan, Jiahe, Li, Risheng, Xia, Longfei, Wang, Haoyu, and Liu, Kexin
- Subjects
- GROUND penetrating radar; SOIL moisture; DEEP learning; EARTH sciences; SOIL sampling
- Abstract
Ground-penetrating radar (GPR) is an efficient and nondestructive geophysical method with great potential for detecting soil water content at the farmland scale. However, a key challenge in soil detection is obtaining soil water content rapidly and in real time. In recent years, deep learning methods have become more widespread in the earth sciences, making it possible to apply them to soil water content inversion from GPR data. In this paper, we propose a neural network framework, GPRSW, based on deep learning of GPR data. GPRSW is an end-to-end network that directly inverts volumetric soil water content (VSWC) from single-channel GPR data. Synthetic experiments show that GPRSW accurately identifies the boundaries between different VSWC layers in the model in the time domain. The predicted VSWC fits the model well within 40 ns, with a maximum error after 40 ns of less than 0.10 cm³·cm⁻³. To validate our method, we conducted GPR measurements at the experimental field of the Academy of Agricultural Sciences in Gongzhuling City, Jilin Province, and applied GPRSW to VSWC measurement. The results show that the values predicted by GPRSW match field soil samples and are consistent with the overall trend of the TDR soil-probe samples, with a maximum difference not exceeding 0.03 cm³·cm⁻³. Our study therefore shows that GPRSW has the potential to be applied to obtain soil water content from GPR data on farmland. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. An Overview on Visual SLAM: From Tradition to Semantic.
- Author
- Chen, Weifeng, Shang, Guangtao, Ji, Aihong, Zhou, Chengjun, Wang, Xiyang, Xu, Chonghui, Li, Zhenxiong, and Hu, Kai
- Subjects
- DEEP learning; COMPUTER vision
- Abstract
Visual SLAM (VSLAM) has been developing rapidly due to its advantages of low-cost sensors, easy fusion with other sensors, and richer environmental information. Traditional vision-based SLAM research has achieved a great deal, but it may fail to deliver the desired results in challenging environments. Deep learning has propelled the development of computer vision, and the combination of deep learning and SLAM has attracted increasing attention. Semantic information, as high-level environmental information, can enable robots to better understand their surroundings. This paper reviews the development of VSLAM technology from two perspectives: traditional VSLAM and semantic VSLAM combined with deep learning. For traditional VSLAM, we summarize the advantages and disadvantages of indirect and direct methods in detail and list some classical open-source VSLAM algorithms. We then focus on the development of semantic VSLAM based on deep learning. Starting with the typical neural networks, CNNs and RNNs, we detail how neural networks improve the VSLAM system, and then examine how object detection and semantic segmentation introduce semantic information into VSLAM. We believe the coming intelligent era cannot develop without the help of semantic technology: introducing deep learning into the VSLAM system to provide semantic information can help robots better perceive their surroundings and offer people higher-level assistance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Recognition of the Bare Soil Using Deep Machine Learning Methods to Create Maps of Arable Soil Degradation Based on the Analysis of Multi-Temporal Remote Sensing Data.
- Author
- Rukhovich, Dmitry I., Koroleva, Polina V., Rukhovich, Danila D., and Rukhovich, Alexey D.
- Subjects
- SOIL degradation; MACHINE learning; REMOTE sensing; DEEP learning; FALSE positive error
- Abstract
The detection of degraded soil distribution areas is an urgent task. It is difficult and very time-consuming to solve this problem using ground-based methods. The modeling of degradation processes based on digital elevation models makes it possible to construct maps of potential degradation, which may differ from the actual spatial distribution of degradation. The use of remote sensing data (RSD) for soil degradation detection is very widespread. Most often, vegetation indices (indicative botany) have been used for this purpose. In this paper, we propose a method for constructing soil maps based on a multi-temporal analysis of the bare soil surface (BSS). It is an alternative to the use of vegetation indices. The detection of the bare soil surface was carried out using the spectral neighborhood of the soil line (SNSL) technology. For the automatic recognition of BSS on each RSD image, computer vision based on deep machine learning (neural networks) was used. A dataset of 244 BSS distribution masks on 244 Landsat 4, 5, 7, and 8 scenes spanning 37 years was developed. Half of the dataset was used as a training sample (Landsat path/row 173/028). The other half was used as a test sample (Landsat path/row 174/027). Binary masks were sufficient for recognition: each RSD pixel was assigned the value "1" where BSS was detected and "0" where it was absent. The accuracy of the machine prediction of the presence of BSS was 75%. The detection of degradation was based on the average long-term spectral characteristics of the RED and NIR bands. The coefficient Cmean, which is the distance of the point with the average long-term values of RED and NIR from the origin of the spectral plane RED/NIR, was calculated as an integral characteristic of the mean long-term values. Higher long-term average values of spectral brightness served as indicators of the spread of soil degradation. 
To test the method of constructing soil degradation maps based on deep machine learning, an acceptance sample of 133 Landsat scenes of path/row 173/026 was used. On the territory of the acceptance sample, ground verification of the maps of the coefficient Cmean was carried out. Ground verification showed that the values of this coefficient make it possible to estimate the content of organic matter in the plow horizon (R2 = 0.841) and the thickness of the humus horizon (R2 = 0.8599). In total, 80 soil pits were analyzed on an area of 649 ha across eight agricultural fields. The type I error (false positive) of degradation detection was 17.5%, and the type II error (false negative) was 2.5%. When the presence of degradation was determined by ground methods, 90% of the ground data coincided with the degradation detected from RSD. Thus, the quality of machine learning for BSS recognition is sufficient for the construction of soil degradation maps. The SNSL technology allows us to create maps of soil degradation based on the long-term average spectral characteristics of the BSS. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
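The coefficient Cmean in the abstract above is defined as the distance of the long-term mean (RED, NIR) point from the origin of the RED/NIR spectral plane, i.e. a Euclidean norm of the two per-pixel band means. A minimal sketch of that calculation with NumPy (array names and shapes are assumptions for illustration, not from the paper):

```python
import numpy as np

def c_mean(red_stack, nir_stack):
    """Distance of the long-term mean (RED, NIR) point from the origin.

    red_stack, nir_stack: arrays of shape (n_scenes, rows, cols) holding
    RED and NIR values for each bare-soil-surface scene of a pixel stack.
    """
    red_mean = np.nanmean(red_stack, axis=0)  # per-pixel long-term mean RED
    nir_mean = np.nanmean(nir_stack, axis=0)  # per-pixel long-term mean NIR
    return np.hypot(red_mean, nir_mean)       # sqrt(RED_mean^2 + NIR_mean^2)

# Toy example: one pixel whose long-term means are RED = 0.3, NIR = 0.4
red = np.full((5, 1, 1), 0.3)
nir = np.full((5, 1, 1), 0.4)
print(c_mean(red, nir)[0, 0])  # 0.5
```

Per the abstract, higher Cmean values (brighter long-term bare-soil spectra) indicate the spread of soil degradation.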
5. S2Looking: A Satellite Side-Looking Dataset for Building Change Detection.
- Author
- Shen, Li, Lu, Yao, Chen, Hao, Wei, Hao, Xie, Donghai, Yue, Jiabao, Chen, Rui, Lv, Shouye, and Jiang, Bitao
- Subjects
- REMOTE-sensing images; LANDSAT satellites; DEEP learning; OPTICAL images; RURAL geography; REMOTE sensing
- Abstract
Building-change detection underpins many important applications, especially in the military and crisis-management domains. Recent methods used for change detection have shifted towards deep learning, which depends on the quality of its training data. The assembly of large-scale annotated satellite imagery datasets is therefore essential for global building-change surveillance. Existing datasets almost exclusively offer near-nadir viewing angles. This limits the range of changes that can be detected. By offering larger observation ranges, the scroll imaging mode of optical satellites presents an opportunity to overcome this restriction. This paper therefore introduces S2Looking, a building-change-detection dataset that contains large-scale side-looking satellite images captured at various off-nadir angles. The dataset consists of 5000 bitemporal image pairs of rural areas and more than 65,920 annotated instances of changes throughout the world. The dataset can be used to train deep-learning-based change-detection algorithms. It expands upon existing datasets by providing (1) larger viewing angles; (2) large illumination variances; and (3) the added complexity of rural images. To facilitate the use of the dataset, a benchmark task has been established, and preliminary tests suggest that deep-learning algorithms find the dataset significantly more challenging than the closest-competing near-nadir dataset, LEVIR-CD+. S2Looking may therefore promote important advances in existing building-change-detection algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. An Overview of Neural Network Methods for Predicting Uncertainty in Atmospheric Remote Sensing.
- Author
- Doicu, Adrian, Doicu, Alexandru, Efremenko, Dmitry S., Loyola, Diego, and Trautmann, Thomas
- Subjects
- REMOTE sensing; INTERVAL analysis; RADIATIVE transfer; DEEP learning; PROBLEM solving
- Abstract
In this paper, we present neural network methods for predicting uncertainty in atmospheric remote sensing. These include methods for solving the direct and the inverse problem in a Bayesian framework. In the first case, a method based on a neural network for simulating the radiative transfer model and a Bayesian approach for solving the inverse problem is proposed. In the second case, (i) a neural network, in which the output is the convolution of the output for a noise-free input with the input noise distribution; and (ii) a Bayesian deep learning framework that predicts input aleatoric and model uncertainties, are designed. In addition, a neural network that uses assumed density filtering and interval arithmetic to compute uncertainty is employed for testing purposes. The accuracy and the precision of the methods are analyzed by considering the retrieval of cloud parameters from radiances measured by the Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Application of Deep Learning Architectures for Satellite Image Time Series Prediction: A Review.
- Author
- Moskolaï, Waytehad Rose, Abdou, Wahabou, Dipanda, Albert, and Kolyang
- Subjects
- DEEP learning; REMOTE-sensing images; TIME series analysis; CONVOLUTIONAL neural networks; MACHINE learning; ARTIFICIAL intelligence
- Abstract
Satellite image time series (SITS) is a sequence of satellite images that record a given area at several consecutive times. The aim of such sequences is to use not only spatial information but also the temporal dimension of the data, which serves multiple real-world applications, such as classification, segmentation, anomaly detection, and prediction. Several traditional machine learning algorithms have been developed and successfully applied to time series prediction. However, these methods have limitations in some situations, so deep learning (DL) techniques have been introduced to achieve better performance. Reviews of machine learning and DL methods for time series prediction problems have been conducted in previous studies. However, to the best of our knowledge, none of these surveys has addressed the specific case of works using DL techniques and satellite images as datasets for prediction. Therefore, this paper concentrates on DL applications for SITS prediction, giving an overview of the main elements used to design and evaluate the predictive models, namely the architectures, data, optimization functions, and evaluation metrics. The reviewed DL-based models are divided into three categories: recurrent neural network-based models, hybrid models, and feed-forward-based models (convolutional neural networks and multi-layer perceptrons). The main characteristics of satellite images and the major existing applications in the field of SITS prediction are also presented in this article. These applications include weather forecasting, precipitation nowcasting, spatio-temporal analysis, and missing data reconstruction. Finally, current limitations and proposed workable solutions related to the use of DL for SITS prediction are also highlighted. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Adaptive Network Detector for Radar Target in Changing Scenes.
- Author
- Jing, He, Cheng, Yongqiang, Wu, Hao, and Wang, Hongqiang
- Subjects
- RADAR targets; DEEP learning; DETECTORS; SMART structures; FALSE alarms
- Abstract
Data-driven deep learning has been successfully applied to radar target detection. However, the performance of a detection network degrades severely when the detection scene changes, since a network trained on data from one scene is not suitable for another scene with a different data distribution. To address this problem, an adaptive network detector combined with scene classification is proposed in this paper. Aiming to maximize the posterior probability of the feature vectors, the scene classification network is arranged to control the output ratio of a group of detection sub-networks. Because the classification error rate of traditional machine learning is uncertain, a classifier with a controllable false-alarm rate is constructed. In addition, a new network training strategy, which freezes the parameters of the scene classification network and selectively fine-tunes the parameters of the detection sub-networks, is proposed for the adaptive network structure. Comprehensive experiments demonstrate that the proposed method maintains a high detection probability when the detection scene changes. Compared with some classical detectors, the adaptive network detector shows better performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. The Use of Deep Machine Learning for the Automated Selection of Remote Sensing Data for the Determination of Areas of Arable Land Degradation Processes Distribution.
- Author
- Rukhovich, Dmitry I., Koroleva, Polina V., Rukhovich, Danila D., and Kalinina, Natalia V.
- Subjects
- DEEP learning; MACHINE learning; REMOTE sensing; ARABLE land; SOIL degradation; FALSE positive error; LAND degradation; PARTICLE size determination
- Abstract
Soil degradation processes are widespread on agricultural land. Ground-based methods for detecting degradation require a great deal of labor and time. Remote methods based on the analysis of vegetation indices can significantly reduce the volume of ground surveys. Currently, machine learning methods are increasingly being used to analyze remote sensing data. In this paper, we set out to apply deep machine learning and vegetation-index calculation to automate the detection of soil degradation areas on arable land. In the course of the work, a method was developed for locating degraded areas of soil cover on arable fields. The method is based on the use of multi-temporal remote sensing data. The selection of suitable remote sensing data scenes is based on deep machine learning. Deep machine learning was based on an analysis of 1028 scenes of Landsat 4, 5, 7, and 8 over 530 agricultural fields. Landsat data from 1984 to 2019 were analyzed. A dataset was created manually for each "Landsat scene"/"agricultural field number" pair (for each agricultural field, the suitability of each Landsat scene was assessed). Areas of soil degradation were calculated based on the frequency of occurrence of low NDVI values over 35 years. Low NDVI values were calculated separately for each suitable fragment of the satellite image within the boundaries of each agricultural field. Within each field, the NDVI values of the lowest third of the field area (lower than those of the other two-thirds) were considered low. During testing, the method gave 12.5% type I errors (false positives) and 3.8% type II errors (false negatives). Independent verification of the method was carried out on six agricultural fields covering 713.3 hectares. Humus content and thickness of the humus horizon were determined at 42 ground-based points. In the arable land degradation areas identified by the proposed method, the probability of detecting soil degradation by field methods was 87.5%. 
The probability of detecting soil degradation by ground-based methods outside the predicted regions was 3.8%. The results indicate that deep machine learning is feasible for remote sensing data selection based on a binary dataset. This eliminates the need for intermediate filtering systems in the selection of satellite imagery (detection of clouds, cloud shadows, open soil surface, etc.). Landsat scenes suitable for calculation are selected directly, which allows the construction of soil degradation maps to be automated. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
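The "low NDVI" rule in the abstract above, flagging the third of each field's area whose NDVI is lower than that of the remaining two-thirds, amounts to a per-field tercile threshold. A hedged NumPy sketch (variable names and the interpolation behavior of the percentile are assumptions for illustration):

```python
import numpy as np

def low_ndvi_mask(ndvi):
    """Flag pixels in the lowest third of one field's NDVI distribution.

    ndvi: 1-D array of NDVI values for the pixels of one agricultural
    field in one suitable Landsat scene.
    """
    threshold = np.percentile(ndvi, 100 / 3)  # 33.3rd percentile of the field
    return ndvi < threshold                   # True where NDVI is "low"

# Toy field of six pixels: the two lowest values fall below the tercile
ndvi = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
print(low_ndvi_mask(ndvi).sum())  # 2
```

Per the abstract, the frequency with which a pixel is flagged "low" across 35 years of suitable scenes then drives the degradation map.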
10. Automatic Mapping of Center Pivot Irrigation Systems from Satellite Images Using Deep Learning.
- Author
- Saraiva, Marciano, Protas, Églen, Salgado, Moisés, and Souza, Carlos
- Subjects
- DEEP learning; ARTIFICIAL neural networks; REMOTE-sensing images; IRRIGATION; GOLF course maintenance; WATER management; IMAGE segmentation
- Abstract
The availability of freshwater is becoming a global concern. Because agricultural consumption has been increasing steadily, the mapping of irrigated areas is key for supporting the monitoring of land use and better management of available water resources. In this paper, we propose a method to automatically detect and map center pivot irrigation systems using U-Net, an image segmentation convolutional neural network architecture, applied to a constellation of PlanetScope images from the Cerrado biome of Brazil. Our objective is to provide a fast and accurate alternative to map center pivot irrigation systems with very high spatial and temporal resolution imagery. We implemented a modified U-Net architecture using the TensorFlow library and trained it on the Google cloud platform with a dataset built from more than 42,000 very high spatial resolution PlanetScope images acquired between August 2017 and November 2018. The U-Net implementation achieved a precision of 99% and a recall of 88% to detect and map center pivot irrigation systems in our study area. This method, proposed to detect and map center pivot irrigation systems, has the potential to be scaled to larger areas and improve the monitoring of freshwater use by agricultural activities. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
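The 99% precision and 88% recall reported in the abstract above are the standard segmentation metrics. Under the usual definitions, they can be computed pixel-wise from a predicted mask and a reference mask as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def precision_recall(pred, truth):
    """Pixel-wise precision and recall for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()  # true positives
    precision = tp / pred.sum()             # TP / (TP + FP)
    recall = tp / truth.sum()               # TP / (TP + FN)
    return precision, recall

# Toy 5-pixel example: 2 true positives, 1 false positive, 1 false negative
pred = np.array([1, 1, 0, 0, 1])
truth = np.array([1, 0, 0, 1, 1])
p, r = precision_recall(pred, truth)
print(p, r)  # both 2/3 here
```

High precision with lower recall, as reported for the center-pivot detector, means flagged pixels are almost always real pivots, while some pivot area is missed.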
Discovery Service for Jio Institute Digital Library