21 results for "Ao, Zurui"
Search Results
2. Decreased river runoff on the Mongolian Plateau since around 2000
- Author
- Qi, Wenhua, Hu, Xiaomei, Bai, Hao, Yusup, Asadilla, Ran, Qinwei, Yang, Hui, Wang, Haijun, Ao, Zurui, and Tao, Shengli
- Published
- 2024
3. Exploring the seasonal effects of urban morphology on land surface temperature in urban functional zones
- Author
- Liu, Yefei, Zhang, Weijie, Liu, Wenkai, Tan, Zhangzhi, Hu, Sheng, Ao, Zurui, Li, Jiaju, and Xing, Hanfa
- Published
- 2024
4. Super-Resolution Image Reconstruction Method between Sentinel-2 and Gaofen-2 Based on Cascaded Generative Adversarial Networks.
- Author
- Wang, Xinyu, Ao, Zurui, Li, Runhao, Fu, Yingchun, Xue, Yufei, and Ge, Yunxin
- Subjects
- GENERATIVE adversarial networks, IMAGE reconstruction, HIGH resolution imaging
- Abstract
Due to the multi-scale and spectral features of remote sensing images compared to natural images, there are significant challenges in super-resolution reconstruction (SR) tasks. Networks trained on simulated data often exhibit poor reconstruction performance on real low-resolution (LR) images. Additionally, compared to natural images, remote sensing imagery involves fewer high-frequency components in network construction. To address the above issues, we introduce a new high–low-resolution dataset GF_Sen based on GaoFen-2 and Sentinel-2 images and propose a cascaded network CSWGAN combined with spatial–frequency features. Firstly, based on the proposed self-attention GAN (SGAN) and wavelet-based GAN (WGAN) in this study, the CSWGAN combines the strengths of both networks. It not only models long-range dependencies and better utilizes global feature information, but also extracts frequency content differences between different images, enhancing the learning of high-frequency information. Experiments have shown that the networks trained based on the GF_Sen can achieve better performance than those trained on simulated data. The reconstructed images from the CSWGAN demonstrate improvements in the PSNR and SSIM by 4.375 and 4.877, respectively, compared to the relatively optimal performance of the ESRGAN. The CSWGAN can reflect the reconstruction advantages of a high-frequency scene and provides a working foundation for fine-scale applications in remote sensing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
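Result 4 above evaluates super-resolution outputs with PSNR and SSIM. As a small, generic illustration of how such a metric is computed (not the authors' evaluation code; the array names and the 0–1 reflectance range are assumptions), a minimal PSNR sketch:

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images scaled to [0, data_range]."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy usage with random stand-ins for a reference patch and its super-resolved estimate.
rng = np.random.default_rng(0)
reference = rng.random((256, 256))
reconstructed = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)
print(f"PSNR: {psnr(reference, reconstructed):.2f} dB")
```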
5. Developing a Multi-Scale Convolutional Neural Network for Spatiotemporal Fusion to Generate MODIS-like Data Using AVHRR and Landsat Images.
- Author
- Zhang, Zhicheng, Ao, Zurui, Wu, Wei, Wang, Yidan, and Xin, Qinchuan
- Subjects
- CONVOLUTIONAL neural networks, LANDSAT satellites, MODIS (Spectroradiometer), OPTICAL sensors, STANDARD deviations, REMOTE sensing
- Abstract
Remote sensing data are becoming increasingly important for quantifying long-term changes in land surfaces. Optical sensors onboard satellite platforms face a tradeoff between temporal and spatial resolutions. Spatiotemporal fusion models can produce high spatiotemporal data, while existing models are not designed to produce moderate-spatial-resolution data, like Moderate-Resolution Imaging Spectroradiometer (MODIS), which has moderate spatial detail and frequent temporal coverage. This limitation arises from the challenge of combining coarse- and fine-spatial-resolution data, due to their large spatial resolution gap. This study presents a novel model, named multi-scale convolutional neural network for spatiotemporal fusion (MSCSTF), to generate MODIS-like data by addressing the large spatial-scale gap in blending the Advanced Very-High-Resolution Radiometer (AVHRR) and Landsat images. To mitigate the considerable biases between AVHRR and Landsat with MODIS images, an image correction module is included into the model using deep supervision. The outcomes show that the modeled MODIS-like images are consistent with the observed ones in five tested areas, as evidenced by the root mean square errors (RMSE) of 0.030, 0.022, 0.075, 0.036, and 0.045, respectively. The model makes reasonable predictions on reconstructing retrospective MODIS-like data when evaluating against Landsat data. The proposed MSCSTF model outperforms six other comparative models in accuracy, with regional average RMSE values being lower by 0.005, 0.007, 0.073, 0.062, 0.070, and 0.060, respectively, compared to the counterparts in the other models. The developed method does not rely on MODIS images as input, and it has the potential to reconstruct MODIS-like data prior to 2000 for retrospective studies and applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
6. Evaluating maize phenotype dynamics under drought stress using terrestrial lidar
- Author
- Su, Yanjun, Wu, Fangfang, Ao, Zurui, Jin, Shichao, Qin, Feng, Liu, Boxin, Pang, Shuxin, Liu, Lingli, and Guo, Qinghua
- Published
- 2019
7. An Adaptive Multiscale Generative Adversarial Network for the Spatiotemporal Fusion of Landsat and MODIS Data.
- Author
- Pan, Xiaoyu, Deng, Muyuan, Ao, Zurui, and Xin, Qinchuan
- Subjects
- GENERATIVE adversarial networks, LANDSAT satellites, PROBABILISTIC generative models, SCALE-free network (Statistical physics), REMOTE sensing, ORBITAL velocity, HIGH resolution imaging
- Abstract
The monitoring of rapidly changing land surface processes requires remote sensing images with high spatiotemporal resolution. As remote sensing satellites have different satellite orbits, satellite orbital velocities, and sensors, it is challenging to acquire remote sensing images with high resolution and dense time series within a reasonable temporal interval. Remote sensing spatiotemporal fusion is one of the effective ways to acquire high-resolution images with long time series. Most of the existing STF methods use artificially specified fusion strategies, resulting in blurry images and poor generalization ability. Additionally, some methods lack continuous time change information, leading to poor performance in capturing sharp changes in land covers. In this paper, we propose an adaptive multiscale network for spatiotemporal fusion (AMS-STF) based on a generative adversarial network (GAN). AMS-STF reconstructs high-resolution images by leveraging the temporal and spatial features of the input data through multiple adaptive modules and multiscale features. In AMS-STF, for the first time, deformable convolution is used for the STF task to solve the shape adaptation problem, allowing for adaptive adjustment of the convolution kernel based on the different shapes and types of land use. Additionally, an adaptive attention module is introduced in the networks to enhance the ability to perceive temporal changes. We conducted experiments comparing AMS-STF to the most widely used and innovative models currently available on three Landsat-MODIS datasets, as well as ablation experiments to evaluate some innovative modules. The results demonstrate that the adaptive modules significantly improve the fusion effect of land covers and enhance the clarity of their boundaries, which proves the effectiveness of AMS-STF. [ABSTRACT FROM AUTHOR]
- Published
- 2023
8. A global long-term, high-resolution satellite radar backscatter data record (1992–2022+): merging C-band ERS/ASCAT and Ku-band QSCAT.
- Author
- Tao, Shengli, Ao, Zurui, Wigneron, Jean-Pierre, Saatchi, Sassan, Ciais, Philippe, Chave, Jérôme, Le Toan, Thuy, Frison, Pierre-Louis, Hu, Xiaomei, Chen, Chi, Fan, Lei, Wang, Mengjia, Zhu, Jiangling, Zhao, Xia, Li, Xiaojun, Liu, Xiangzhuo, Su, Yanjun, Hu, Tianyu, Guo, Qinghua, and Wang, Zhiheng
- Subjects
- PIXELS, SNOW accumulation, BACKSCATTERING, RADAR, STANDARD deviations, SOIL moisture, REMOTE sensing, EARTH sciences
- Abstract
Satellite radar backscatter contains unique information on land surface moisture, vegetation features, and surface roughness and has thus been used in a range of Earth science disciplines. However, there is no single global radar data set that has a relatively long wavelength and a decades-long time span. We here provide the first long-term (since 1992), high-resolution (∼8.9 km instead of the commonly used ∼25 km resolution) monthly satellite radar backscatter data set over global land areas, called the long-term, high-resolution scatterometer (LHScat) data set, by fusing signals from the European Remote Sensing satellite (ERS; 1992–2001; C-band; 5.3 GHz), Quick Scatterometer (QSCAT, 1999–2009; Ku-band; 13.4 GHz), and the Advanced SCATterometer (ASCAT; since 2007; C-band; 5.255 GHz). The 6-year data gap between C-band ERS and ASCAT was filled by modelling a substitute C-band signal during 1999–2009 from Ku-band QSCAT signals and climatic information. To this end, we first rescaled the signals from different sensors, pixel by pixel. We then corrected the monthly signal differences between the C-band and the scaled Ku-band signals by modelling the signal differences from climatic variables (i.e. monthly precipitation, skin temperature, and snow depth) using decision tree regression. The quality of the merged radar signal was assessed by computing the Pearson r , root mean square error (RMSE), and relative RMSE (rRMSE) between the C-band and the corrected Ku-band signals in the overlapping years (1999–2001 and 2007–2009). We obtained high Pearson r values and low RMSE values at both the regional (r≥0.92 , RMSE ≤ 0.11 dB, and rRMSE ≤ 0.38) and pixel levels (median r across pixels ≥ 0.64, median RMSE ≤ 0.34 dB, and median rRMSE ≤ 0.88), suggesting high accuracy for the data-merging procedure. The merged radar signals were then validated against the European Space Agency (ESA) ERS-2 data, which provide observations for a subset of global pixels until 2011, even after the failure of on-board gyroscopes in 2001. We found highly concordant monthly dynamics between the merged radar signals and the ESA ERS-2 signals, with regional Pearson r values ranging from 0.79 to 0.98. These results showed that our merged radar data have a consistent C-band signal dynamic. The LHScat data set (10.6084/m9.figshare.20407857; Tao et al., 2023) is expected to advance our understanding of the long-term changes in, e.g., global vegetation and soil moisture with a high spatial resolution. The data set will be updated on a regular basis to include the latest images acquired by ASCAT and to include even higher spatial and temporal resolutions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
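Result 8 above fills the ERS/ASCAT gap by modelling the monthly C-band minus Ku-band difference from precipitation, skin temperature, and snow depth with decision tree regression, then scores the merge with Pearson r, RMSE, and rRMSE. A minimal single-pixel sketch of that idea, assuming synthetic inputs and an rRMSE normalised by the mean reference signal (both assumptions, not the paper's implementation):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
n_months = 120  # synthetic overlap period for one pixel

# Synthetic climate predictors and backscatter signals (stand-ins for the real inputs).
climate = rng.random((n_months, 3))  # precipitation, skin temperature, snow depth
sigma0_c = -10 + 2 * climate[:, 0] - climate[:, 2] + rng.normal(0, 0.1, n_months)          # C-band
sigma0_ku_scaled = sigma0_c - (0.5 * climate[:, 1] - 0.3) + rng.normal(0, 0.1, n_months)    # rescaled Ku-band

# Model the C-minus-Ku difference from climate variables, then correct the Ku-band signal.
# (In practice the model would be fitted on overlap years and applied to the gap years.)
diff_model = DecisionTreeRegressor(max_depth=4, random_state=0)
diff_model.fit(climate, sigma0_c - sigma0_ku_scaled)
sigma0_ku_corrected = sigma0_ku_scaled + diff_model.predict(climate)

# Evaluate the merge over the overlapping months.
r, _ = pearsonr(sigma0_c, sigma0_ku_corrected)
rmse = np.sqrt(np.mean((sigma0_c - sigma0_ku_corrected) ** 2))
rrmse = rmse / np.abs(np.mean(sigma0_c))  # assumed normalisation
print(f"r = {r:.2f}, RMSE = {rmse:.2f} dB, rRMSE = {rrmse:.2f}")
```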
9. A novel ground surface subsidence prediction model for sub-critical mining in the geological condition of a thick alluvium layer
- Author
- Chang, Zhanqiang, Wang, Jinzhuang, Chen, Mi, Ao, Zurui, and Yao, Qi
- Published
- 2015
10. Toward 30 m Fine-Resolution Land Surface Phenology Mapping at a Large Scale Using Spatiotemporal Fusion of MODIS and Landsat Data.
- Author
- Ruan, Yongjian, Ruan, Baozhen, Zhang, Xinchang, Ao, Zurui, Xin, Qinchuan, Sun, Ying, and Jing, Fengrui
- Abstract
Satellite-retrieved land surface phenology (LSP) is a first-order control on terrestrial ecosystem productivity, which is critical for monitoring the ecological environment and human and social sustainable development. However, mapping large-scale LSP at a 30 m resolution remains challenging due to the lack of dense time series images with a fine resolution and the difficulty in processing large volumes of data. In this paper, we proposed a framework to extract fine-resolution LSP across the conterminous United States using the supercomputer Tianhe-2. The proposed framework comprised two steps: (1) generation of the dense two-band enhanced vegetation index (EVI2) time series with a fine resolution via the spatiotemporal fusion of MODIS and Landsat images using ESTARFM, and (2) extraction of the long-term and fine-resolution LSP using the fused EVI2 dataset. We obtained six methods (i.e., AT, FOD, SOD, RCR, TOD and CCR) of fine-resolution LSP with the proposed framework, and evaluated its performance at both the site and regional scales. Comparing with PhenoCam-observed phenology, the start of season (SOS) derived from the fusion data using six methods of AT, FOD, SOD, RCR, TOD and CCR obtained r values of 0.43, 0.44, 0.41, 0.29, 0.46 and 0.52, respectively, and RMSE values of 30.9, 28.9, 32.2, 37.9, 37.8 and 33.2, respectively. The satellite-retrieved end of season (EOS) using six methods of AT, FOD, SOD, RCR, TOD and CCR obtained r values of 0.68, 0.58, 0.68, 0.73, 0.65 and 0.56, respectively, and RMSE values of 51.1, 53.6, 50.5, 44.9, 51.8 and 54.6, respectively. Comparing with the MCD12Q2 phenology, the satellite-retrieved 30 m fine-resolution LSP of the proposed framework can obtain more information on the land surface, such as rivers, ridges and valleys, which is valuable for phenology-related studies. The proposed framework can yield robust fine-resolution LSP at a large-scale, and the results have great potential for application into studies addressing problems in the ecological environmental at a large scale. [ABSTRACT FROM AUTHOR]
- Published
- 2023
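Result 10 above extracts phenology metrics such as the start of season from fused EVI2 time series, one of the listed methods being an amplitude-threshold (AT) approach. A minimal sketch of EVI2 and a threshold-based SOS, assuming a 50% amplitude threshold and a synthetic one-year series (assumptions for illustration, not the ESTARFM-based workflow itself):

```python
import numpy as np

def evi2(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Two-band enhanced vegetation index (Jiang et al. 2008 formulation)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def sos_amplitude_threshold(doy: np.ndarray, vi: np.ndarray, fraction: float = 0.5) -> int:
    """First day of year at which the VI rises above a fraction of its seasonal amplitude."""
    threshold = vi.min() + fraction * (vi.max() - vi.min())
    rising = np.where(vi >= threshold)[0]
    return int(doy[rising[0]])

# Synthetic one-year series standing in for fused 30 m EVI2 observations.
doy = np.arange(1, 366, 8)
nir = 0.3 + 0.25 * np.exp(-((doy - 200) / 60.0) ** 2)
red = 0.12 - 0.06 * np.exp(-((doy - 200) / 60.0) ** 2)
series = evi2(nir, red)
print("Estimated SOS (day of year):", sos_amplitude_threshold(doy, series))
```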
11. Gap-Filling and Missing Information Recovery for Time Series of MODIS Data Using Deep Learning-Based Methods.
- Author
- Wang, Yidan, Zhou, Xuewen, Ao, Zurui, Xiao, Kun, Yan, Chenxi, and Xin, Qinchuan
- Subjects
- TIME series analysis, ARTIFICIAL neural networks, MODIS (Spectroradiometer), DEEP learning, IMAGE reconstruction, REMOTE sensing
- Abstract
Sensors onboard satellite platforms with short revisiting periods acquire frequent earth observation data. One limitation to the utility of satellite-based data is missing information in the time series of images due to cloud contamination and sensor malfunction. Most studies on gap-filling and cloud removal process individual images, and existing multi-temporal image restoration methods still have problems in dealing with images that have large areas with frequent cloud contamination. Considering these issues, we proposed a deep learning-based method named content-sequence-texture generation (CSTG) network to generate gap-filled time series of images. The method uses deep neural networks to restore remote sensing images with missing information by accounting for image contents, textures and temporal sequences. We designed a content generation network to preliminarily fill in the missing parts and a sequence-texture generation network to optimize the gap-filling outputs. We used time series of Moderate-resolution Imaging Spectroradiometer (MODIS) data in different regions, which include various surface characteristics in North America, Europe and Asia to train and test the proposed model. Compared to the reference images, the CSTG achieved structural similarity (SSIM) of 0.953 and mean absolute errors (MAE) of 0.016 on average for the restored time series of images in artificial experiments. The developed method could restore time series of images with detailed texture and generally performed better than the other comparative methods, especially with large or overlapped missing areas in time series. Our study provides an available method to gap-fill time series of remote sensing images and highlights the power of the deep learning methods in reconstructing remote sensing images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
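Result 11 above reconstructs missing observations in MODIS time series with a deep network and reports SSIM and MAE against reference images. For context only, a much simpler baseline sketch that gap-fills each pixel by linear interpolation along the time axis and computes MAE; the array shapes and cloud mask are invented, and this is not the CSTG method:

```python
import numpy as np

def gap_fill_linear(series: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """Fill masked time steps of a (time, height, width) stack by per-pixel linear interpolation."""
    t = np.arange(series.shape[0])
    filled = series.copy()
    for i in range(series.shape[1]):
        for j in range(series.shape[2]):
            good = valid[:, i, j]
            if good.any() and not good.all():
                filled[~good, i, j] = np.interp(t[~good], t[good], series[good, i, j])
    return filled

# Synthetic stack with a block of "cloudy" observations removed.
rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0, 2 * np.pi, 46))[:, None, None] + rng.normal(0, 0.02, (46, 20, 20))
valid = np.ones_like(truth, dtype=bool)
valid[15:20, 5:15, 5:15] = False  # simulated cloud gap
observed = np.where(valid, truth, np.nan)
restored = gap_fill_linear(observed, valid)
mae = np.mean(np.abs(restored[~valid] - truth[~valid]))
print(f"MAE over gap-filled pixels: {mae:.4f}")
```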
12. Deep Learning-Based Spatiotemporal Data Fusion Using a Patch-to-Pixel Mapping Strategy and Model Comparisons.
- Author
- Ao, Zurui, Sun, Ying, Pan, Xiaoyu, and Xin, Qinchuan
- Subjects
- MULTISENSOR data fusion, CONVOLUTIONAL neural networks, DEEP learning, REMOTE sensing, LAND cover, TEXTURE analysis (Image processing), IMAGE fusion
- Abstract
Tradeoffs among the spatial, spectral, and temporal resolutions of satellite sensors make it difficult to acquire remote sensing images at both high spatial and high temporal resolutions from an individual sensor. Studies have developed methods to fuse spatiotemporal data from different satellite sensors, and these methods often assume linear changes in surface reflectance across time and adopt empirical rules and handcrafted features. Here, we propose a dense spatiotemporal fusion (DenseSTF) network based on the convolutional neural network (CNN) to deal with these problems. DenseSTF uses a patch-to-pixel modeling strategy that can provide abundant texture details for each pixel in the target fine image to handle heterogeneous landscapes and models both forward and backward temporal dependencies to account for land cover changes. Moreover, DenseSTF adopts a mapping function with few assumptions and empirical rules, which allows for establishing reliable relationships between the coarse and fine images. We tested DenseSTF in three contrast scenes with different degrees of heterogeneity and temporal changes, and made comparisons with three rule-based fusion approaches and three CNNs. Experimental results indicate that DenseSTF can provide accurate fusion results and outperform the other tested methods, especially when the land cover changes abruptly. The structure of the deep learning networks largely impacts the success of data fusion. Our study developed a novel approach based on CNN using a patch-to-pixel mapping strategy and highlighted the effectiveness of the deep learning networks in the spatiotemporal fusion of the remote sensing data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
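Result 12 above relies on a patch-to-pixel strategy in which every fine-resolution target pixel is predicted from the coarse-image patch centred on it. A minimal sketch of how such training samples could be assembled, assuming co-registered grids, reflect padding, and a 9 × 9 patch (all assumptions; the DenseSTF network itself is not reproduced):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def patch_to_pixel_samples(coarse: np.ndarray, fine: np.ndarray, patch: int = 9):
    """Pair every fine pixel with the coarse patch centred on it (same grid assumed)."""
    pad = patch // 2
    padded = np.pad(coarse, pad, mode="reflect")
    patches = sliding_window_view(padded, (patch, patch))  # shape (H, W, patch, patch)
    X = patches.reshape(-1, patch, patch)
    y = fine.reshape(-1)
    return X, y

# Toy co-registered coarse/fine images standing in for resampled coarse and fine observations.
rng = np.random.default_rng(3)
fine = rng.random((64, 64))
coarse = fine + rng.normal(0, 0.05, fine.shape)  # degraded proxy of the fine image
X, y = patch_to_pixel_samples(coarse, fine, patch=9)
print(X.shape, y.shape)  # (4096, 9, 9) (4096,)
```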
13. A method for quality management of vegetation phenophases derived from satellite remote sensing data.
- Author
- Ruan, Yongjian, Zhang, Xinchang, Xin, Qinchuan, Sun, Ying, Ao, Zurui, and Jiang, Xin
- Subjects
- TOTAL quality management, REMOTE sensing, VEGETATION management, REMOTE-sensing images, VEGETATION monitoring, PEARSON correlation (Statistics)
- Abstract
Remote sensing has become an important technique for monitoring vegetation phenology. The quality of remote-sensing images and derived products is key to successful extraction of vegetation phenophases. There is a need to develop quality management methods to evaluate the data uncertainty and assist the removal of the noises. This paper developed a shape quality assurance score threshold (SQAT) method which accounts for the trend in satellite-derived vegetation index associated with the process of vegetation growth. The proposed method was tested on six widely used methods for extracting vegetation phenophases. Results showed that the SQAT method can effectively identify noises in the vegetation index time series and improve the accuracies of estimated start of season (SOS) and end of season (EOS) of the six methods. After removal of identified noises, the Pearson correlation coefficient (r) averagely increased by 8% for SOS, and 11% for EOS. Regression analyses of vegetation phenophases between the PhenoCam observations and MCD12Q2 product showed that the proposed method performs better than the QA score of MCD12Q2 for quality management. This paper provides promising method for quality management; it has the potential to reduce the uncertainty of the vegetation index time series that can support studies of vegetation phenology monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2021
14. Constructing 10-m NDVI Time Series From Landsat 8 and Sentinel 2 Images Using Convolutional Neural Networks.
- Author
- Ao, Zurui, Sun, Ying, and Xin, Qinchuan
- Abstract
Normalized difference vegetation index (NDVI) carries valuable information related to the photosynthetic activity of vegetation and is essential for monitoring phenological changes and ecosystem dynamics. The medium to high spatial resolution satellite images from Landsat 8 and Sentinel 2 offer opportunities to generate dense NDVI time series at 10-m resolution to improve our understanding of the land surface processes. However, synergistic use of Landsat 8 and Sentinel 2 for generating frequent and consistent NDVI data remains challenging as they have different spatial resolutions and spectral response functions. In this letter, we developed an attentional super resolution convolutional neural network (ASRCNN) for producing 10-m NDVI time series through fusion of Landsat 8 and Sentinel 2 images. We evaluated its performance in two heterogeneous areas. Quantitative assessments indicated that the developed network outperforms five commonly used fusion methods [i.e., enhanced deep convolutional spatiotemporal fusion network (EDCSTFN), super resolution convolutional neural network (SRCNN), spatial and temporal adaptive reflectance fusion model (STARFM), enhanced STARFM (ESTARFM), and flexible spatiotemporal data fusion (FSDAF)]. The influence of the method selection on the fusion accuracy is much greater than that of the fusion strategy in blending Landsat–Sentinel NDVI. Our results illustrate the advantages and potentials of the deep learning approaches on satellite data fusion. [ABSTRACT FROM AUTHOR]
- Published
- 2021
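Result 14 above fuses Landsat 8 and Sentinel-2 data into 10 m NDVI series. The network is not sketched here, but the NDVI both sensors contribute is straightforward to compute; the band arrays below are placeholders for NIR and red surface reflectance (e.g. Sentinel-2 B8/B4 or Landsat 8 B5/B4), and the epsilon guard is an addition:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized difference vegetation index from NIR and red surface reflectance."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Placeholder reflectance arrays standing in for co-registered 10 m bands.
rng = np.random.default_rng(7)
nir_band = rng.uniform(0.2, 0.5, (128, 128))    # e.g. Sentinel-2 B8
red_band = rng.uniform(0.02, 0.15, (128, 128))  # e.g. Sentinel-2 B4
print("Mean NDVI:", float(ndvi(nir_band, red_band).mean()))
```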
15. A Semiprognostic Phenology Model for Simulating Multidecadal Dynamics of Global Vegetation Leaf Area Index.
- Author
- Xin, Qinchuan, Zhou, Xuewen, Wei, Nan, Yuan, Hua, Ao, Zurui, and Dai, Yongjiu
- Subjects
- LEAF area index, VEGETATION dynamics, PLANT phenology, PHENOLOGY, BROADLEAF forests, PLANT products
- Abstract
Vegetation leaf phenology, often reflected by the dynamics in leaf area index (LAI), influences a variety of land surface processes. Robust models of vegetation phenology are pivot components in both land surface models and dynamic global vegetation models but remain challenging in terms of the model accuracy. This study develops a semiprognostic phenology model that is suitable for simulating time series of vegetation LAI. This method establishes a linear relationship between the steady‐state LAI (i.e., the LAI when the environment conditions remain unchanging) and gross primary productivity, meaning that the LAI an unchanging environment can carry is proportional to the photosynthetic products produced by plant leaves and implements with a simple light use efficiency algorithm of MOD17 to form a closed set of equations. We derive an analytical solution based on the Lambert W function to the closed equations and then apply a simple restricted growth process model to simulate the time series of actual LAI. The results modeled using global climate data demonstrate that the model is able to capture both the spatial pattern and intra‐annual and interannual variation of LAI derived from the satellite‐based product on a global scale. The results modeled using the flux tower data suggest that the developed model is able to explain over 70% variation in daily LAI for each plant functional type except evergreen broadleaf forest. The developed semiprognostic approach provides a simple solution to modeling the spatiotemporal variation in vegetation LAI across plant functional types on the global scale. Plain Language Summary: Modeling vegetation leaf phenology remains challenging in terms of the model accuracy. We develop a semiprognostic vegetation phenology method that is suitable for simulating the variation in the time series of leaf area index. This approach establishes a linear relationship between LAI and vegetation productivity and implements with a simple light use efficiency algorithm of MOD17. The results modeled using global climate data demonstrate that the model is able to capture both the spatial pattern and intra‐annual and interannual variation of LAI derived from the remote sensing data on the global scale. Key Points: We developed a semiprognostic phenology model for simulating the dynamics of vegetation leaf area index on the global scaleThe novel model is able to capture the spatiotemporal variation of global vegetation leaf area index as compared with observationsThe developed approach provides a simple solution to modeling multidecadal variation of global vegetation leaf area index [ABSTRACT FROM AUTHOR]
- Published
- 2020
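Result 15 above solves its closed phenology equations analytically with the Lambert W function, but the abstract does not spell the equations out. The sketch below solves an assumed minimal system in the same spirit, in which steady-state LAI is proportional to GPP and light absorption follows a Beer's-law FPAR, giving L = a(1 - exp(-kL)); the symbols a and k and the whole formulation are assumptions, not the paper's model:

```python
import numpy as np
from scipy.special import lambertw

def steady_state_lai(a: float, k: float = 0.5) -> float:
    """Non-trivial solution of L = a * (1 - exp(-k * L)), valid when a * k > 1.

    Substituting u = k * (L - a) gives u * exp(u) = -a * k * exp(-a * k),
    so u = W0(-a * k * exp(-a * k)) and L = a + u / k.
    """
    if a * k <= 1.0:
        return 0.0  # only the trivial steady state exists
    u = lambertw(-a * k * np.exp(-a * k), k=0).real
    return a + u / k

# Sanity check against simple fixed-point iteration for an assumed a (scaled GPP term) and k.
a, k = 6.0, 0.5
lai = steady_state_lai(a, k)
x = 1.0
for _ in range(200):
    x = a * (1.0 - np.exp(-k * x))
print(f"Lambert-W solution: {lai:.4f}, fixed-point iteration: {x:.4f}")
```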
16. Enhanced Vegetation Growth in the Urban Environment Across 32 Cities in the Northern Hemisphere.
- Author
- Ruan, Yongjian, Zhang, Xinchang, Xin, Qinchuan, Ao, Zurui, and Sun, Ying
- Subjects
- GROWING season, PHENOLOGY, URBAN ecology (Sociology), PLANT growth, REMOTE sensing
- Abstract
How the urban environment influences vegetation phenology is important to understand the living environment and the climate‐vegetation interaction. This study investigates changes in vegetation phenology in the urban environment using remote sensing data for 32 major cities in the Northern Hemisphere. Vegetation phenological information for both urban and rural areas of each individual city was derived from the remote sensing data. We found that the urban environment generally enhanced vegetation growth but at varied degrees for different regions in the Northern Hemisphere. Vegetation phenology metrics, including the start of season (SOS), the end of season (EOS), and the growing season length (GSL), have large differences between urban and rural environment. Vegetation SOS in urban areas occurred earlier than in rural areas for 22 in 32 cities, and the relationships between advanced urban SOS and the distance away from urban centers are significant for 9 cities. Vegetation EOS in urban areas occurred later than in rural areas for 19 in 32 cities, and the relationships between delayed urban EOS and the distance away from urban centers are significant for 10 cities. The response of SOS to the urban environment is found dependent upon the latitude of urban centers and urban spring daytime and nighttime temperatures. Analysis on ground observational records of vegetation phenology in the region of North America supports the findings derived from remote sensing data. These findings could help to understand the impacts of the urban environment on vegetation growth. Key Points: We investigate vegetation phenology variation for 32 major cities in the Northern HemisphereVegetation phenological metrics have large differences between urban and rural environmentVegetation growth is enhanced in the urban environment [ABSTRACT FROM AUTHOR]
- Published
- 2019
17. Automated Surface Water Extraction Combining Sentinel-2 Imagery and OpenStreetMap Using Presence and Background Learning (PBL) Algorithm.
- Author
- Zhang, Zhiqiang, Zhang, Xinchang, Jiang, Xin, Xin, Qinchuan, Ao, Zurui, Zuo, Qiting, and Chen, Liyan
- Abstract
Surface water bodies play important roles in socioeconomic development and ecosystem balance. The fast-developing technology of remote sensing offers opportunities for the automatic extraction and dynamic monitoring of surface water bodies. Compared with other medium- and low-spatial-resolution remote sensing images, such as Landsat and MODIS, Sentinel-2 imagery provides higher spatial resolution and revisit frequency, making it more suitable for surface water extraction. The existing research works on surface water extraction using Sentinel-2 imagery remain focusing on the construction of water indexes, which is easily affected by shadows and built-up areas. In this study, we propose an automated surface water extraction method based on the presence and background learning algorithm (ASWE-PBL) using Sentinel-2 imagery and OpenStreetMap (OSM) data. The OSM data are used as the auxiliary data to automatically select water samples, and the PBL algorithm is adopted to predict the water presence probability. ASWE-PBL is validated using six typical study areas in China, and the modified normalized difference water index, the automated water extraction index, and the random forest classifier are employed for comparison. Moreover, feature optimization, parameter sensitivity, computation cost, and future work are analyzed and discussed. The experimental results show that ASWE-PBL can effectively suppress noise caused by shadows and built-up areas and can obtain the highest kappa coefficient for five study areas but not for Guangzhou and that the ten-spectral-band composite of Sentinel-2 imagery is a better feature combination scheme than that of spectral and water indexes. [ABSTRACT FROM AUTHOR]
- Published
- 2019
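Result 17 above compares its presence-and-background-learning extractor against water indexes such as the modified normalized difference water index (MNDWI). A minimal sketch of that comparison baseline, assuming green/SWIR reflectance arrays and the commonly used zero threshold (assumptions; this is not the ASWE-PBL method):

```python
import numpy as np

def mndwi(green: np.ndarray, swir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Modified normalized difference water index (Xu, 2006)."""
    return (green - swir) / (green + swir + eps)

def water_mask(green: np.ndarray, swir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Boolean water mask from an MNDWI threshold (assumed; commonly 0)."""
    return mndwi(green, swir) > threshold

# Placeholder reflectance: e.g. Sentinel-2 B3 (green) and B11 (SWIR) resampled to a common grid.
rng = np.random.default_rng(11)
green_band = rng.uniform(0.02, 0.2, (100, 100))
swir_band = rng.uniform(0.01, 0.3, (100, 100))
mask = water_mask(green_band, swir_band)
print("Water fraction:", float(mask.mean()))
```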
18. Efficient mitigation of atmospheric phase effects in repeat-pass InSAR measurements.
- Author
- Chang, Zhanqiang, Liu, Xiaomeng, Luo, Yi, Ao, Zurui, Yao, Qi, Zhao, Chao, and Wan, Xiangxing
- Subjects
- ATMOSPHERIC effects on remote sensing, SYNTHETIC aperture radar, METEOROLOGICAL stations, INTERFEROMETRY, DEFORMATION of surfaces, MODIS (Spectroradiometer)
- Abstract
The problem of atmospheric phase effects is currently one of the most important limiting factors for widespread application of repeat-pass interferometric synthetic aperture radar (InSAR) measurements. Due to the extraordinary complexity of the atmospheric inhomogeneity and turbulence, it is generally difficult to obtain satisfactory mitigation of the atmospheric phase effects in repeat-pass InSAR measurements. In recent years, several methods have been developed for mitigating the atmospheric phase effects. An effective approach is interferogram stacking, which is based on stacking independent interferograms. However, as many as 2n images are required to generate n interferograms and the atmospheric delay errors of the stacked interferogram decrease only with the square root of the number of interferograms in the conventional interferogram stacking method, which is not very efficient. In order to efficiently mitigate the atmospheric phase effects on the stacked interferogram in repeat-pass InSAR measurements, we propose a relay-interferogram stacking method. Compared with the conventional method, this method not only can efficiently mitigate atmospheric phase effects on the stacked interferogram, but also greatly decreases the number of required synthetic aperture radar (SAR) images. The key element is that the first and the last SAR images are selected from the periods of similar meteorological conditions. In addition, we present an application of the approach to the study of ground subsidence in the area around Beijing, China. [ABSTRACT FROM AUTHOR]
- Published
- 2015
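Result 18 above notes that conventional stacking reduces atmospheric delay errors only with the square root of the number of independent interferograms. A toy numerical sketch of that scaling, assuming a fixed deformation phase plus independent Gaussian atmospheric screens (it demonstrates the statistical argument only, not the proposed relay-stacking method):

```python
import numpy as np

rng = np.random.default_rng(5)
deformation = 0.8   # common deformation phase (radians), identical in every interferogram
sigma_atmo = 1.0    # std of the atmospheric phase contribution per interferogram

for n in (1, 4, 16, 64):
    # 5000 Monte Carlo trials of averaging n independent interferograms.
    trials = deformation + rng.normal(0, sigma_atmo, size=(5000, n)).mean(axis=1)
    error_std = trials.std()
    print(f"n = {n:3d}: empirical error std = {error_std:.3f}, "
          f"expected sigma/sqrt(n) = {sigma_atmo / np.sqrt(n):.3f}")
```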
19. Decadal Lake Volume Changes (2003–2020) and Driving Forces at a Global Scale.
- Author
- Feng, Yuhao, Zhang, Heng, Tao, Shengli, Ao, Zurui, Song, Chunqiao, Chave, Jérôme, Le Toan, Thuy, Xue, Baolin, Zhu, Jiangling, Pan, Jiamin, Wang, Shaopeng, Tang, Zhiyao, and Fang, Jingyun
- Subjects
- STANDARD deviations, LAKES
- Abstract
Lakes play a key role in the global water cycle, providing essential water resources and ecosystem services for humans and wildlife. Quantifying long-term changes in lake volume at a global scale is therefore important to the sustainability of humanity and natural ecosystems. Yet, such an estimate is still unavailable because, unlike lake area, lake volume is three-dimensional, challenging to be estimated consistently across space and time. Here, taking advantage of recent advances in remote sensing technology, especially NASA's ICESat-2 satellite laser altimeter launched in 2018, we generated monthly volume series from 2003 to 2020 for 9065 lakes worldwide with an area ≥ 10 km². We found that the total volume of the 9065 lakes increased by 597 km³ (90% confidence interval 239–2618 km³). Validation against in situ measurements showed a correlation coefficient of 0.98, an RMSE (i.e., root mean square error) of 0.57 km³ and a normalized RMSE of 2.6%. In addition, 6753 (74.5%) of the lakes showed an increasing trend in lake volume and were spatially clustered into nine hot spots, most of which are located in sparsely populated high latitudes and the Tibetan Plateau; 2323 (25.5%) of the lakes showed a decreasing trend in lake volume and were clustered into six hot spots, most located in the world's arid/semi-arid regions where lakes are scarce, but population density is high. Our results uncovered, from a three-dimensional volumetric perspective, spatially uneven lake changes that aggravate the conflict between human demands and lake resources. The situation is likely to intensify given projected higher temperatures in glacier-covered regions and drier climates in arid/semi-arid areas. The 15 hot spots could serve as a blueprint for prioritizing future lake research and conservation efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2022
20. Impacts of Rapid Socioeconomic Development on Cropping Intensity Dynamics in China during 2001–2016.
- Author
- Li, Le, Ao, Zurui, Zhao, Yaolong, and Liu, Xulong
- Subjects
- CROP development, ARABLE land, CROP yields, LAND use, LAND resource
- Abstract
Changes in cropping intensity reflect not only changes in land use but also the transformation of land functions. Although both natural conditions and socioeconomic factors can influence the spatial distribution of the cropping intensity and its changes, socioeconomic developments related to human activities can exert great impacts on short term cropping intensity changes. The driving force of this change has a high level of uncertainty; and few researchers have implemented comprehensive studies on the underlying driving forces and mechanisms of these changes. This study produced cropping intensity maps in China from 2001 to 2016 using remote sensing data and analyzed the impacts of socioeconomic drivers on cropping intensity and its changes in nine major agricultural zones in China. We found that the average annual cropping intensity in all nine agricultural zones increased from 2001 to 2016 under rapid socioeconomic development, and the trends in the seven major agricultural zones were significantly increased (p < 0.05), based on a Mann–Kendall test, except for the Northeast China Plain (NE Plain) and Qinghai Tibet Plateau (QT Plateau). Based on the results from the Geo-Detector, a widely used geospatial analysis tool, the dominant factors that affected cropping intensity distribution were related to the arable land output in the plain regions and topography in the mountainous regions. The factors that affected cropping intensity changes were mainly related to the arable land area and crop yields in northern China, and regional economic developments, such as machinery power input and farmers' income in southern China. These findings provide useful cropping intensity data and profound insights for policymaking on how to use cultivated land resources efficiently and sustainably. [ABSTRACT FROM AUTHOR]
- Published
- 2019
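Result 20 above tests the significance of cropping intensity trends with a Mann–Kendall test. A minimal sketch of the standard S statistic and normal approximation, assuming no tied values in the annual series and using synthetic data:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x: np.ndarray):
    """Mann-Kendall trend test (no tie correction): returns S, Z, and the two-sided p-value."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return int(s), float(z), float(p)

# Synthetic 2001-2016 annual cropping intensity with a mild upward trend.
years = np.arange(2001, 2017)
intensity = 1.2 + 0.01 * (years - 2001) + np.random.default_rng(9).normal(0, 0.02, len(years))
s, z, p = mann_kendall(intensity)
print(f"S = {s}, Z = {z:.2f}, p = {p:.3f}  (significant at 0.05: {p < 0.05})")
```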
21. Deep Learning Approaches for the Mapping of Tree Species Diversity in a Tropical Wetland Using Airborne LiDAR and High-Spatial-Resolution Remote Sensing Images.
- Author
- Sun, Ying, Huang, Jianfeng, Ao, Zurui, Lao, Dazhao, and Xin, Qinchuan
- Subjects
- DEEP learning, SPECIES diversity, LASER based sensors, MULTISPECTRAL imaging, REMOTE sensing, LIDAR, FORESTED wetlands, SPECTRAL imaging
- Abstract
The monitoring of tree species diversity is important for forest or wetland ecosystem service maintenance or resource management. Remote sensing is an efficient alternative to traditional field work to map tree species diversity over large areas. Previous studies have used light detection and ranging (LiDAR) and imaging spectroscopy (hyperspectral or multispectral remote sensing) for species richness prediction. The recent development of very high spatial resolution (VHR) RGB images has enabled detailed characterization of canopies and forest structures. In this study, we developed a three-step workflow for mapping tree species diversity, the aim of which was to increase knowledge of tree species diversity assessment using deep learning in a tropical wetland (Haizhu Wetland) in South China based on VHR-RGB images and LiDAR points. Firstly, individual trees were detected based on a canopy height model (CHM, derived from LiDAR points) by the local-maxima-based method in the FUSION software (Version 3.70, Seattle, USA). Then, tree species at the individual tree level were identified via a patch-based image input method, which cropped the RGB images into small patches (the individually detected trees) based on the tree apexes detected. Three different deep learning methods (i.e., AlexNet, VGG16, and ResNet50) were modified to classify the tree species, as they can make good use of the spatial context information. Finally, four diversity indices, namely, the Margalef richness index, the Shannon–Wiener diversity index, the Simpson diversity index, and the Pielou evenness index, were calculated from the fixed subset with a size of 30 × 30 m for assessment. In the classification phase, VGG16 had the best performance, with an overall accuracy of 73.25% for 18 tree species. Based on the classification results, mapping of tree species diversity showed reasonable agreement with field survey data (Margalef: R² = 0.4562, root-mean-square error (RMSE) = 0.5629; Shannon–Wiener: R² = 0.7948, RMSE = 0.7202; Simpson: R² = 0.7907, RMSE = 0.1038; Pielou: R² = 0.5875, RMSE = 0.3053). While challenges remain for individual tree detection and species classification, the deep-learning-based solution shows potential for mapping tree species diversity. [ABSTRACT FROM AUTHOR]
- Published
- 2019
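Result 21 above summarises the classified trees in each 30 × 30 m subset with four diversity indices. A minimal sketch computing the Margalef richness, Shannon–Wiener, Simpson, and Pielou evenness indices from per-tree species labels; the labels are invented, and the Simpson index is taken in its 1 − Σp² form, which is an assumed variant:

```python
import numpy as np
from collections import Counter

def diversity_indices(labels):
    """Margalef, Shannon-Wiener, Simpson (1 - sum p^2), and Pielou indices for one plot."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    n = counts.sum()  # number of individuals
    s = len(counts)   # number of species
    p = counts / n
    shannon = -np.sum(p * np.log(p))
    return {
        "margalef": (s - 1) / np.log(n) if n > 1 else 0.0,
        "shannon_wiener": shannon,
        "simpson": 1.0 - np.sum(p ** 2),
        "pielou": shannon / np.log(s) if s > 1 else 0.0,
    }

# Invented species labels for the trees detected inside one 30 x 30 m subset.
plot_labels = ["Ficus"] * 6 + ["Bombax"] * 3 + ["Syzygium"] * 2 + ["Litchi"] * 1
print(diversity_indices(plot_labels))
```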