6,134 results
Search Results
352. A review of “Regional Satellite Oceanography” by S. V. Victorov (London: Taylor & Francis) 1996. [Pp. 306] £24.95 (paper), £44.95 (hardback).
- Author
-
Cracknell, Arthur P.
- Published
- 1996
- Full Text
- View/download PDF
353. PMT gain self-adjustment system for high-accuracy echo signal detection.
- Author
-
Zhou, Guoqing, Xu, Chao, Zhang, Haotian, Zhou, Xiang, Zhao, Dawei, Wu, Gongbei, Lin, Jinchun, Liu, Zhexian, Yang, Jiazhi, Nong, Xueqin, and Zhang, Lieping
- Subjects
-
SIGNAL detection, FIELD programmable gate arrays, SIGNAL generators, PHOTOMULTIPLIERS, WATER depth, VOLTAGE control
- Abstract
The intensity difference between the echo signals from the water surface and bottom during bathymetric LiDAR operation requires photomultiplier tube (PMT) gain self-adjustment. Otherwise, noisy echo signals are collected while weak but useful signals are not. For this reason, this paper proposes a PMT gain self-adjustment system for the high-accuracy detection of LiDAR echo signals. The developed system uses a field programmable gate array (FPGA) collector as the feedback signal generator, an STM32 controller as the PMT gain and voltage control signal generator, and a DA module as the PMT gain voltage conversion circuit. The system controls the PMT gain voltage by judging the feedback signal to achieve gain self-adjustment. The system was verified in indoor tank, building roof, and outdoor pond experiments. The comparative experiments show that the developed system detects laser energy intensity with at least 2.26 times the sensitivity of the traditional system and can measure water depths at least 2.5 times greater. It can therefore be concluded that the proposed PMT gain self-adjusting system effectively adapts to changes in laser energy, improves water depth measurement, controls the amplitude of the echo signals, increases the accuracy of water depth detection, reduces saturation of the PMT detector, and protects the PMT from damage. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
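The feedback loop described in this abstract (an FPGA collector judging the echo and an STM32 stepping the gain voltage through a DA module) can be illustrated with a minimal software sketch. All names, thresholds, step sizes, and voltage limits below are hypothetical, chosen only to show the bang-bang style of adjustment; the actual system implements this in hardware:

```python
def adjust_gain(gain_v, amplitude, v_min=0.3, v_max=1.1,
                low=0.1, high=0.9, step=0.02):
    """One iteration of a hypothetical gain-voltage update.

    If the echo amplitude nears saturation, lower the PMT control
    voltage to protect the detector; if the echo is too weak to
    detect, raise it. All quantities are in normalized units.
    """
    if amplitude > high:        # near saturation: reduce gain
        gain_v -= step
    elif amplitude < low:       # weak echo: boost sensitivity
        gain_v += step
    return min(max(gain_v, v_min), v_max)  # clamp to a safe range
```

Iterating such an update keeps the echo amplitude inside the usable band, which is the behaviour the paper reports: strong surface returns no longer saturate the PMT, and weak bottom returns remain detectable.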
354. Crop-specific hyperspectral band selection method using limited ground-truth data.
- Author
-
Dave, Kinjal and Trivedi, Y N
- Subjects
-
SUPPORT vector machines, COMPUTATIONAL complexity, GOVERNMENT aid
- Abstract
Hyperspectral imaging plays a significant role in crop classification, which aims to separate the pixels of various crops in the imagery and aids governments in deciding agricultural policies. However, the high spectral dimensionality of hyperspectral data demands considerable computing power and time. This paper presents a new band selection method based on spectral information divergence and correlation (SIDCorrL), which selects the optimum bands for classifying crops. SIDCorrL requires only single ground-truth pixels, those with the least spectral information divergence value, to select the bands. The method thus not only needs minimal ground-truth data but also reduces computational complexity. We evaluated the proposed method on three hyperspectral datasets: AVIRIS-NG, Indian Pines and Salinas. We used overall accuracy and the kappa coefficient as performance parameters for the support vector machine and k-nearest neighbours classifiers. The experimental findings reveal that the proposed band selection method achieves a maximum overall accuracy of about 84.79% on Indian Pines and 93.08% on the Salinas dataset. The proposed methodology improves overall accuracy when the number of selected bands ranges from 35 to 50, compared with the other competitive band selection approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
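The spectral information divergence (SID) measure underlying the SIDCorrL method in this abstract is a standard symmetric Kullback-Leibler divergence between spectra normalized to probability distributions. A minimal sketch of just the SID computation (the paper's band-correlation step and selection logic are not reproduced; the function name is illustrative):

```python
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral information divergence between two reflectance spectra.

    Each spectrum is normalized to a probability distribution, and the
    symmetric Kullback-Leibler divergence is returned: identical spectra
    give 0, increasingly dissimilar spectra give larger values.
    """
    p = np.asarray(x, float) / np.sum(x) + eps
    q = np.asarray(y, float) / np.sum(y) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

In the spirit of the abstract, the ground-truth pixel with the least SID relative to the other candidates would act as the representative pixel from which bands are selected.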
355. Contrastive learning–based structure preserving projection for hyperspectral images.
- Author
-
Zhao, Siyu, Zhang, Hongjie, Gong, Bo, Jing, Ling, and Chen, Yingyi
- Subjects
-
FEATURE extraction, SUPERVISED learning, SPECTRAL imaging
- Abstract
Unsupervised feature extraction methods have been widely applied to remove the huge amount of redundancy in hyperspectral images because they remain effective when label information for the samples is unavailable. However, owing to this lack of label information, unsupervised feature extraction methods have weaker discriminant ability than supervised methods, and when the number of samples is small the dimension reduction is usually not good enough. To address these problems, an unsupervised structure preserving projection method named contrastive learning based sparsity preserving projection (CL-SPP) is proposed in this paper. First, CL-SPP increases the discriminant ability of samples by introducing the concept of positive and negative pairs, and adjusts the number of positive and negative pairs in the training set through a parameter. Then, by minimizing the contrastive loss function, CL-SPP makes the positive pairs more similar and the negative pairs less similar after projection. Moreover, the proposed contrastive learning-based method is also extended to the supervised case, as well as to a general graph embedding model framework based on contrastive learning. Experiments on three hyperspectral images demonstrate that the proposed methods perform better than related approaches. More impressively, the performance of CL-SPP is comparable to that of its supervised version. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
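The positive/negative-pair objective described in this abstract can be illustrated with a generic contrastive loss of the NT-Xent style on already-projected samples. This is only a sketch of the general idea; the actual CL-SPP loss, projection-matrix learning, and graph construction are not reproduced, and the function names are hypothetical:

```python
import numpy as np

def contrastive_loss(z_anchor, z_pos, z_negs, tau=0.5):
    """Generic contrastive loss for one anchor in the projected space.

    Minimizing it pulls the positive pair together and pushes the
    negatives away, via cosine similarities passed through a softmax.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(cos(z_anchor, z_pos) / tau)
    negs = sum(np.exp(cos(z_anchor, n) / tau) for n in z_negs)
    return -np.log(pos / (pos + negs))
```

As expected of such a loss, it is small when the positive pair is already aligned and large when a negative sample is more similar to the anchor than the positive is.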
356. A new method for retrieving urban heat island intensity using GNSS-derived ZTD and atmospheric empirical model.
- Author
-
He, Qimin, Li, Li, Lian, Dajun, Yu, Hang, Chen, Guodong, Gao, Biqing, Wang, Rui, Song, Kangming, and Li, Long
- Subjects
-
URBAN heat islands, ATMOSPHERIC models, GLOBAL Positioning System, URBAN climatology, WATER vapor, TROPOSPHERIC aerosols
- Abstract
The urban heat island (UHI) effect is one of the typical characteristics of urban climate; it causes harmful gases to accumulate without dispersing, producing air pollution that affects human health and living conditions. In this paper, a new method for calculating UHI intensity (UHII) using the Global Navigation Satellite Systems (GNSS) derived zenith tropospheric delay (GNSS-ZTD) dataset and an atmospheric empirical model was proposed and tested against UHII observations from ground meteorological sensors in Hong Kong. The method requires accurate GNSS-ZTD datasets, a priori atmospheric parameters (i.e. ground pressure and water vapour partial pressure) and the coefficients of the refractivity function as inputs. The GNSS-ZTD datasets were obtained from observations of the Hong Kong GNSS continuously operating reference station network; the a priori atmospheric parameters were obtained from an atmospheric model based on the ERA5 dataset (the fifth-generation reanalysis of the European Centre for Medium-Range Weather Forecasts); and the coefficients of the refractivity function were obtained from historical radiosonde data. The ground refractivity, pressure and water vapour partial pressure can be estimated from the above datasets, after which the time series of UHII can be obtained using the refractivity model proposed by Smith and Weintraub (SW-UHII) or by Thayer (Thayer-UHII). The results showed that the SW-UHII is more accurate than the Thayer-UHII, with a root mean square (RMS) error 14% lower. The mean bias, RMS and standard deviation of the SW-UHII compared with the meteorological UHII over GNSS stations are 1.82°C, 1.9°C and 1.41°C across all seasons, respectively, and the largest RMS occurs in winter (day of year 1–61 and 331–365). All the above results indicate that the GNSS technique holds promise for monitoring UHII. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
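The Smith-Weintraub model referenced in this abstract relates refractivity N to pressure P (hPa), temperature T (K) and water-vapour partial pressure e (hPa) as N = 77.6 P/T + 3.73×10⁵ e/T². Given N (derived from GNSS-ZTD) and a priori P and e, T follows from the positive root of the quadratic N·T² − 77.6·P·T − 3.73×10⁵·e = 0, and UHII is then an urban-rural temperature difference. A minimal sketch (the coefficients are the standard textbook values, not necessarily those the paper fits from radiosonde data, and the sample inputs are invented):

```python
import math

K1, K2 = 77.6, 3.73e5  # standard Smith-Weintraub coefficients

def refractivity(P, T, e):
    """N from pressure P [hPa], temperature T [K], vapour pressure e [hPa]."""
    return K1 * P / T + K2 * e / T ** 2

def temperature(N, P, e):
    """Invert N = K1*P/T + K2*e/T**2 for T (positive quadratic root)."""
    return (K1 * P + math.sqrt((K1 * P) ** 2 + 4 * N * K2 * e)) / (2 * N)

# UHII as the urban-rural temperature difference recovered from refractivity
T_urban = temperature(refractivity(1005.0, 303.0, 25.0), 1005.0, 25.0)
T_rural = temperature(refractivity(1005.0, 300.0, 25.0), 1005.0, 25.0)
```

The round trip is exact up to floating point, which is why the inversion can recover a temperature contrast (here 3 K by construction) from refractivity alone once P and e are supplied by the a priori model.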
357. Sea ice image classification based on ResFPG network and heterogeneous data fusion.
- Author
-
Han, Yanling, Shi, Xi, Wang, Jing, Zhang, Yun, Hong, Zhonghua, Ma, Zhenling, and Zhou, Ruyan
- Subjects
-
SEA ice, MULTISENSOR data fusion, SYNTHETIC aperture radar, OPTICAL radar, SYNTHETIC apertures, DEEP learning, FEATURE extraction, PASSIVE optical networks, MULTISPECTRAL imaging
- Abstract
Sea ice detection plays an important role in climate protection and strategic deployment. Owing to its autonomous learning characteristics, deep learning has gradually been applied to the classification of remote sensing sea ice images. At present, deep learning models are mostly used for sea ice classification from single-source remote sensing data. Because of the limitations of single-source data and the layer-by-layer information loss of deep learning models during feature extraction, fine-grained sea ice classification inevitably encounters a bottleneck. To solve these problems, this paper proposes a sea ice image classification method based on heterogeneous data fusion and a ResNet16-feature pyramid network-spatial pyramid pooling-gated fusion network (ResFPG). In the feature extraction part, the method uses an improved ResNet16 to extract the multi-level feature information of sea ice from synthetic aperture radar and optical data, reducing information loss during feature extraction; it then mines and fuses the low-level spatial information and high-level semantic information through an improved feature pyramid network (FPN), and collects and fuses output features of different scales through a spatial pyramid pooling (SPP) network. In the feature fusion part, a gated feature-level fusion strategy is designed to further improve the overall classification accuracy by adaptively adjusting the feature contributions of the two heterogeneous data sources through a gated fusion network (GFN). To verify the effectiveness of this method, we use two sets of heterogeneous sea ice remote sensing data from the Hudson Bay area for experiments. The experimental results show that, compared with other image classification methods, the proposed method fully excavates and integrates the multi-scale and multi-level features in heterogeneous data, effectively distinguishes the feature contributions, and achieves better classification results (97.14% and 95.85%). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
358. Open-ended remote sensing visual question answering with transformers.
- Author
-
Al Rahhal, Mohamad M., Bazi, Yakoub, Alsaleh, Sara O., Al-Razgan, Muna, Mekhalfi, Mohamed Lamine, Al Zuair, Mansour, and Alajlan, Naif
- Subjects
-
QUESTION answering systems, NATURAL language processing, VISION
- Abstract
Visual question answering (VQA) has very recently been attracting attention in remote sensing. However, the proposed solutions remain rather limited in that existing VQA datasets address closed-ended question-answer queries, which may not reflect real open-ended scenarios. In this paper, we propose a new dataset named VQA-TextRS, built manually with human annotations, that considers various forms of open-ended question-answer pairs. Moreover, we propose an encoder-decoder architecture based on transformers, on account of their self-attention property that allows relational learning across positions of the same sequence without typical recurrence operations. We employ vision and natural language processing (NLP) transformers to draw visual and textual cues from the image and the respective question. Afterwards, we apply a transformer decoder, whose cross-attention mechanism fuses the two modalities. The fusion vectors drive the answer-generation process to produce the final output. We demonstrate that plausible results can be obtained in open-ended VQA. For instance, the proposed architecture scores an accuracy of 84.01% on questions related to the presence of objects in the query images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
359. Automatic registration of optical image and airborne LiDAR data based on centers and corner points of building boundaries.
- Author
-
Wang, Xiangfei, Xu, Bo, Zhao, Shilong, and Li, Xin
- Subjects
-
AIRBORNE-based remote sensing, IMAGE registration, OPTICAL images, OPTICAL radar, SERVER farms (Computer network management), LIDAR, DATA fusion (Statistics)
- Abstract
Applications based on the fusion of aerial imagery and Light Detection and Ranging (LiDAR) data have attracted increasing interest in recent years. Registration is the crucial prerequisite of data integration because of the spatial shift between the two data modalities. However, registration faces many challenges owing to the different mechanisms of LiDAR and cameras, especially in complex urban scenes. To address the problem, this paper proposes an automatic and robust registration method that uses the centres and corner points of building boundaries as tie points. Primary registration is achieved by using the centres of building boundaries as control points. To increase the robustness and accuracy of the registration, corner points of building boundaries are used to estimate the transformation model repeatedly. Two datasets of optical imagery and LiDAR data with different scene coverage are used to verify the performance of the proposed method. Experimental results show that the proposed method achieves a registration accuracy of better than 2 pixels for both cases, based on an assessment of check lines. This result essentially meets the requirements of data fusion. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
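Estimating a transformation model from tie points, as this abstract describes with building centres and corners, can be sketched as a least-squares 2D affine fit. The paper's actual transformation model and its robust iterative refinement are not specified here; this is a generic illustration with invented function names:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src tie points to dst.

    src, dst: (n, 2) arrays of matched points (n >= 3).
    Returns a (2, 3) matrix A such that dst ~= [x, y, 1] @ A.T.
    """
    src = np.asarray(src, float)
    X = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(X, np.asarray(dst, float), rcond=None)
    return A.T                                      # (2, 3)

def apply_affine(A, pts):
    """Apply a (2, 3) affine matrix to (n, 2) points."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T
```

With more tie points than the three required, the residuals of the fit give exactly the kind of check-line accuracy assessment (here, sub-2-pixel) that the paper reports.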
360. Multi-scale feature learning network with channel self-attention for remote sensing single-image super-resolution.
- Author
-
Wang, Xueqin, Jiang, Wenzong, Zhao, Lifei, Liu, Baodi, and Wang, Yanjiang
- Subjects
-
REMOTE sensing, DEEP learning, HIGH resolution imaging, SYNTHETIC aperture radar
- Abstract
The performance of the remote sensing image super-resolution (SR) task has been significantly improved by deep learning. However, previously reported methods usually require long-term training on large synthetic datasets obtained by bicubic downsampling, and they perform well only when the test input is likewise a bicubic-downsampled low-resolution (LR) image. When the input does not match this bicubic downsampling assumption, the performance of the model degrades significantly and the generated SR image is blurred. To better adapt to remote sensing image SR in different situations, this paper proposes a multi-scale feature learning network with channel self-attention for remote sensing single-image super-resolution (MSFLCSA). The proposed MSFLCSA needs no extra synthetic paired dataset, only the single LR input image, for training. MSFLCSA uses a well-designed channel self-attention multi-scale feature learning network to fully learn the repetitive multi-scale features of the input image for remote sensing image SR tasks. Specifically, MSFLCSA extracts features at different levels through multi-column convolutions with different receptive fields to better learn the multi-scale features inside remote sensing images. In addition, the channel self-attention module constructs the dependence relationships between channels, selectively emphasizes interdependent channel features, and further improves the learned multi-scale feature representation. Various qualitative and quantitative experiments validate the effectiveness of MSFLCSA, which achieves performance superior to advanced existing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
361. Enhancing performance of multi-temporal tropical river landform classification through downscaling approaches.
- Author
-
Li, Qing, Barrett, Brian, Williams, Richard, Hoey, Trevor, and Boothroyd, Richard
- Subjects
-
DOWNSCALING (Climatology), IMAGE fusion, REMOTE sensing, REMOTE-sensing images, CLASSIFICATION, MACHINE learning
- Abstract
Multi-temporal remote sensing imagery has the potential to classify river landforms and thereby reconstruct the evolutionary trajectory of river morphologies. Whilst open-access archives of high spatial resolution imagery are increasingly available from satellite sensors such as Sentinel-2, a fundamental challenge remains: maximising the utility of information in each band whilst maintaining a sufficiently fine resolution to identify landforms. Although image fusion and downscaling methods for Sentinel-2 imagery have been investigated for many years, their performance for multi-temporal object-based river landform classification still needs assessment. This investigation first compared three downscaling methods: area-to-point regression kriging (ATPRK), super-resolution based on Sen2Res, and nearest neighbour resampling. We assessed the performance of the three downscaling methods by accuracy, precision, recall and F1-score. ATPRK was the optimal downscaling approach, achieving an overall accuracy of 0.861. We then conducted a set of experiments to determine an optimal training model, exploring single- and multi-date scenarios. We find that better-quality remote sensing imagery improves river landform classification performance, and that multi-date datasets should be considered when establishing machine learning models, as they contribute higher classification accuracy. This paper presents a workflow for automated river landform recognition that could be applied to other tropical rivers with similar hydro-geomorphological characteristics. The choice of downscaling approach influences the performance of river landform classification from satellite imagery and should be considered in river and flood management. The efficient and straightforward workflow developed here achieves high accuracy and supports an improved understanding of machine learning approaches in river landform recognition. Freely available and easy-to-access remote sensing datasets can help extend the workflow to difficult-to-access or remote regions and allow for complete regional and/or national coverage. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
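The accuracy, precision, recall and F1-score used above to compare downscaling methods follow the standard confusion-matrix definitions. A minimal sketch for binary labels (purely illustrative, not the paper's evaluation code; the function name is invented):

```python
def prf1(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = positive)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1
```

For multi-class landform maps, the same quantities are computed per class and averaged, with overall accuracy (as in the 0.861 reported for ATPRK) being the fraction of correctly labelled objects.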
362. A new method to estimate the speed of internal solitary waves based on a single optical remote sensing image.
- Author
-
Liang, Keda, Zhang, Meng, Li, ZhiXin, Yang, Zhonghao, Miao, HongLi, and Wang, Jing
- Subjects
-
OPTICAL remote sensing, INTERNAL waves, MODIS (Spectroradiometer), REMOTE-sensing images, WATER depth, SPEED
- Abstract
Estimating the energy of internal solitary waves (ISWs) in the real ocean is imperative. The energy of an ISW is related to its speed and amplitude, but obtaining the propagation speed of an ISW from a single optical remote sensing image is difficult. Generally, the nonlinear phase speed (NPS) of an ISW can be regarded as its propagation speed. This paper proposes a new inversion approach for the NPS of ISWs based on a single optical remote sensing image. A laboratory simulation platform for optical remote sensing was used to conduct ISW experiments, and data were obtained from a series of experiments. Three NPS inversion models are investigated, based on support vector regression (SVR), random forest (RF) and a deep neural network (DNN), each using a single optical remote sensing image. The accuracy of the inversion models was verified against in-situ data matched to GF-1 satellite images, GF-4 satellite images and Moderate Resolution Imaging Spectroradiometer (MODIS) images. In the verification, the SVR inversion model shows high accuracy across different sea areas and water depths, while the RF and DNN inversion models both show high accuracy in the 93–299 m depth range of the South China Sea. Compared with two other traditional methods for calculating ISW NPS, the SVR inversion model again has the highest accuracy. The results show that the NPS models of ISWs are suitable and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
363. Multi scale feature extraction network with machine learning algorithms for water body extraction from remote sensing images.
- Author
-
Nagaraj, R. and Kumar, Lakshmi Sutha
- Subjects
-
MACHINE learning, REMOTE sensing, BODIES of water, FEATURE extraction, CONVOLUTIONAL neural networks, PIXELS, MULTISPECTRAL imaging
- Abstract
Water Body Extraction (WBE) is a challenging task in remote sensing, owing to the complexity of recognizing surface water bodies with rich texture, spatial, spectral, temporal, and radiometric features. The use of spectral indices has been shown to separate surface water from its surroundings successfully, but at the cost of requiring knowledge of appropriate threshold values. Without such threshold knowledge, extracting water from remote sensing data is challenging, a problem addressed by several Machine Learning (ML) and Deep Learning (DL) algorithms. However, both ML and DL classifications move from visual features to semantic categories at the cost of distinct recognition between water-body and non-water-body features. In this paper, a novel Multi Scale Feature Extraction Network (MSFEN) for extracting pixel-level features from medium-resolution remote sensing images is proposed, and traditional ML classifiers are used to extract the surface water bodies from the pixel-level features produced by MSFEN. The proposed framework is trained and tested on Linear Imaging Self Scanning Sensor-III (LISS-III) multispectral satellite images over major water reservoirs in Tamilnadu, Karnataka, Madhya Pradesh, and Odisha. Experimental results indicate that the proposed MSFEN+SVM model provides accurate extraction results, outperforming the existing state-of-the-art models (Fully Convolutional Network (FCN), Unet, SegNet, Multi Scale Convolutional Neural Network (MSCNN), Deepwater Map, Pyramid Scene Parsing Network (PSPNet), Improved PSPNet, Multi-scale Water Extraction Convolutional Neural Network (MWEN) and Multi-Scale Lake Water Extraction Network (MSLWENet)) in terms of the performance metrics considered. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
364. JSH-Net: joint semantic segmentation and height estimation using deep convolutional networks from single high-resolution remote sensing imagery.
- Author
-
Zhang, Bin, Wan, Yi, Zhang, Yongjun, and Li, Yansheng
- Subjects
-
REMOTE sensing, LAND cover, THREE-dimensional imaging, LAND use, CONVOLUTIONAL neural networks
- Abstract
Semantic segmentation of high-resolution remote sensing imagery is a pivotal component of land use and land cover categorization, and height estimation is essential for rebuilding the 3D information of an image. Because of the high intra-class variation and small inter-class dissimilarity, these two challenging tasks are generally treated separately. This paper proposes a fully convolutional network that tackles both problems simultaneously by estimating the land-cover categories and height values of pixels from a single aerial image. To handle these tasks, we develop a multi-task learning architecture (JSH-Net) that employs a shared feature representation and exploits the potential consistency across tasks, resulting in robust features and better prediction accuracy. Specifically, we propose a novel skip connection module that aggregates contexts from the encoder to the decoder, bridging the semantic gap between them. In addition, we propose a progressive refinement strategy to recover detailed information about objects. Moreover, we add a height estimation branch on the head of the model to utilize the shared features. Experiments conducted on the ISPRS 2D Labelling dataset verify that our network provides precise semantic segmentation and height estimation results from its two output branches and outperforms other state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
365. Parcel-level mapping of apple orchard in smallholder agriculture areas based on feature-level fusion of VHR image and time-series images.
- Author
-
Wang, Haoyu, Wang, Jian, Shen, Zhanfeng, Zhang, Zihan, Li, Junli, Zhao, Lifang, Jiao, Shuhui, Li, Shuo, Lei, Yating, Kou, Wenqi, Li, Jinghan, and Chen, Jingdong
- Subjects
-
APPLE orchards, SMALL farms, IMAGE fusion, DEEP learning, RANDOM forest algorithms, FARMERS
- Abstract
Accurate and reliable parcel-level apple orchard mapping is required for many precise agriculture application models, including planting suitability evaluation, standardized production, and personal agricultural operation loan approval. However, in hilly areas where smallholder management predominates, the highly fragmented and heterogeneous agricultural landscape means that fine parcel-level apple orchard mapping remains challenging. This paper proposes a parcel-level apple orchard mapping method based on feature-level spatiotemporal data fusion, which is suitable for hilly areas where smallholder management predominates. First, a hierarchical strategy that simulates human image cognition processing was used to extract redundant candidate parcels from a very high spatial resolution (VHR) image (Google Earth image with a spatial resolution of 0.6 m). Second, deep learning models, including a Depth-wise Asymmetric Bottleneck Network (DABNet) and long short-term memory (LSTM), were used to extract implicit spatial and time series features of the parcels. Third, the implicit features extracted by the deep learning models were formatted into meta-features, which then formed the feature space together with the morphological and geographical features of the parcel. Fourth, based on the constructed parcel feature space, a random forests (RF) model was used to classify candidate parcels. The experiment was carried out in the town of Guanli, southwest of Qixia city, Shandong Province, China: 21,123 apple orchard parcels were extracted from 31,235 candidate parcels. The overall accuracy (OA) of the parcel-level mapping result was 0.919. The parcel features were combined according to their types, and the performance of different feature combinations for parcel classification was further compared, demonstrating that the proposed meta-features had a stronger spatial information description capability than traditional features. 
Moreover, the mean decrease in accuracy (MDA) index was used to evaluate the importance of each feature, revealing that spatial-information-related meta-features play the most important role in parcel classification. This method provides a methodological reference for parcel-level orchard mapping in hilly areas where smallholder management predominates and can be applied to improve the monitoring of orchards in such areas. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
366. Lightweight SAR Ship detection and 16 Class Classification using Novel Deep Learning Algorithm with a Hybrid Preprocessing Technique.
- Author
-
Raj J, Anil, Idicula, Sumam Mary, and Paul, Binu
- Subjects
-
CONVOLUTIONAL neural networks, MACHINE learning, CONTAINER ships, DEEP learning, OBJECT recognition (Computer vision), BLENDED learning, SHIPS, IMAGE processing
- Abstract
Many studies using deep learning methods for automatic ship detection from SAR images achieve good detection accuracy. Researchers have mainly focused on classifying large ships with distinct features, such as tankers, cargo vessels, and container ships, so more research is needed on classifying detected ships into subclasses. Complete deep learning-based ship detection and classification is challenging because of the unavailability of SAR datasets with both localization details and sub-class information. This paper proposes a ship detection and classification system that classifies detected ships into 16 classes. In this method, the 2D SAR data are preprocessed with image processing and despeckling techniques to generate 'SarNeDe' data, increasing accuracy by reducing false predictions and wrong classifications. The 'SarNeDe' data are used to train and test the deep learning model for detection and classification. The model is designed in a two-stage object detection style without compromising speed or accuracy. The detection part (L-model), developed mostly with depthwise separable CNNs using multi-scale and multi-anchor-box detection schemes, estimates the accurate positions of all ships in the SAR data. The detected ships' 'SarNeDe' data are given as input to the classification part (C-model), a one-shot learning-based classifier built from scratch to distinguish the 16 ship classes. Detection results on the public SAR ship detection dataset (SSDD) and the Dataset of Ship Detection for Deep Learning under Complex Backgrounds (SDCD), together with classification results on the OpenSARShip dataset, validate the proposed method's feasibility. The proposed system is lightweight and can be used in real-time applications. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
367. Unsupervised double weighted graphs via good neighbours for dimension reduction of hyperspectral image.
- Author
-
Chou, Jiahui, Zhao, Siyu, Chen, Yingyi, and Jing, Ling
- Subjects
-
WEIGHTED graphs, PATTERN recognition systems, EUCLIDEAN distance, NEIGHBORS
- Abstract
As a major research topic in pattern recognition, unsupervised dimension reduction is challenging because no label information is available. Most unsupervised dimension reduction methods construct a similarity graph from k-nearest neighbours to preserve local structure in the low-dimensional subspace. However, k-nearest neighbours are computed with the Euclidean distance, which is sensitive to noise and outliers, and considering only local structure reduces classification accuracy. In this paper, a new unsupervised dimension reduction method called unsupervised double-weighted graphs via good neighbours (uDWG-GN) is proposed. First, uDWG-GN uses a local-structure Low-Rank Representation to learn a similarity matrix, which is then used to find the good neighbours of each sample. Second, according to these good neighbours, uDWG-GN considers both similar and dissimilar relationships between samples and constructs double-weighted graphs. Finally, based on the L2,1 norm, uDWG-GN finds the optimal projection matrix by maximizing the distance between dissimilar samples and minimizing the distance between similar samples. Experimental results on three hyperspectral images demonstrate the superiority and effectiveness of our method compared with other dimension reduction methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
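The L2,1 norm appearing in the uDWG-GN objective above is the sum of the Euclidean norms of a matrix's rows, a choice that promotes row sparsity in the learned projection matrix. A quick sketch of just this quantity (the function name is illustrative):

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm of a matrix: sum over rows of each row's L2 norm."""
    return float(np.sum(np.linalg.norm(W, axis=1)))
```

Because every row contributes its full L2 norm, minimizing this term drives entire rows of the projection matrix toward zero, effectively discarding uninformative input dimensions.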
368. Accuracy and processing speed trade-offs in classical and quantum SVM classifier exploiting PRISMA hyperspectral imagery.
- Author
-
Shaik, Riyaaz Uddien and Periasamy, Shoba
- Subjects
-
MACHINE learning, QUANTUM computers, SUPPORT vector machines, HOLM oak, REMOTE sensing, SPEED, QUBITS
- Abstract
Quantum machine learning (QML) focuses on machine learning models developed explicitly for quantum computers. The availability of the first quantum processors has driven further research, particularly into practical applications of QML algorithms in the remote sensing field. The demand for extensive field data in remote sensing applications has started to create bottlenecks for classical machine learning algorithms; QML is becoming a potential solution to such big-data problems because it can learn from fewer data. This paper presents a QML model based on a quantum support vector machine (QSVM) to classify Holm Oak trees using PRISMA hyperspectral imagery. The quantum models were implemented on a quantum simulator and on a real superconducting quantum processor from IBM. The performance of the QML model is assessed in terms of dataset size, overall accuracy, number of qubits, and training and prediction speed. The results indicate that (i) QSVM offered 5% higher accuracy than classical SVM (CSVM) with 50 samples at ≥12 qubits/feature dimensions, and with 20 samples at 16 qubits/feature dimensions; (ii) the training time for QSVM at maximum accuracy was 284 s with 50 samples and 53.68 s with 20 samples; and (iii) the prediction time for 400 pixels using the QSVM model trained on the 50-sample dataset was 5243 s, versus 2845 s with the 20-sample dataset. Overall, QML offers better accuracy but lags in training and prediction speed for hyperspectral data. Another observation is that the prediction speed of QSVM depends on the number of samples used to train the model. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
369. Recovery of impenetrable rough surface profiles via CNN-based deep learning architecture.
- Author
-
Aydin, İzde, Budak, Güven, Sefer, Ahmet, and Yapar, Ali
- Subjects
- *
ROUGH surfaces , *DEEP learning , *NUMERICAL solutions to integral equations , *CONVOLUTIONAL neural networks - Abstract
In this paper, a convolutional neural network (CNN)-based deep learning (DL) architecture is presented for the solution of an electromagnetic inverse problem: imaging the shape of perfectly electric conducting (PEC) rough surfaces. The rough surface is illuminated by a plane wave, and scattered field data are obtained synthetically through the numerical solution of surface integral equations. An effective CNN-DL architecture is implemented by modelling the rough surface variation in terms of convenient spline-type basis functions. The algorithm is numerically tested in various scenarios, including amplitude-only data, and is shown to be effective and useful. [ABSTRACT FROM AUTHOR]
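The spline parameterisation is what makes the inverse problem tractable: the network can regress a short coefficient vector instead of raw surface heights. A minimal sketch of that representation, assuming a 1D profile and a clamped cubic B-spline basis (the knot layout and the synthetic profile below are illustrative, not the paper's):

```python
import numpy as np
from scipy.interpolate import BSpline

# Synthetic rough profile on [0, 1].
x = np.linspace(0.0, 1.0, 200)
profile = 0.05 * np.sin(6 * np.pi * x) + 0.02 * np.sin(10 * np.pi * x)

k = 3                                              # cubic splines
knots = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 20), [1.0] * k]
n_coef = len(knots) - k - 1                        # number of basis functions

# Design matrix: column i is the i-th B-spline basis function sampled on x.
basis = np.column_stack([
    BSpline(knots, np.eye(n_coef)[i], k, extrapolate=False)(x)
    for i in range(n_coef)
])
basis = np.nan_to_num(basis)                       # zero outside each support

# Least-squares fit of the coefficients; a CNN would predict `coef` instead.
coef, *_ = np.linalg.lstsq(basis, profile, rcond=None)
recon = basis @ coef
max_err = np.abs(recon - profile).max()
```

A 200-sample profile compresses to 22 coefficients here with small reconstruction error, which is the dimensionality reduction the CNN output layer benefits from.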
- Published
- 2022
- Full Text
- View/download PDF
370. MANet: a multi-level aggregation network for semantic segmentation of high-resolution remote sensing images.
- Author
-
Chen, Bingyu, Xia, Min, Qian, Ming, and Huang, Junqing
- Subjects
- *
AD hoc computer networks , *GLOBAL method of teaching , *DEEP learning - Abstract
With the continuous improvement of segmentation performance on natural-image datasets, some methods have gradually been applied to high-resolution remote sensing images (HRRSIs). Because of the large amount of ground-object information they contain, even objects of the same type exhibit diverse and complex features across periods or locations. Existing algorithms for semantic segmentation of remote sensing images are limited by short-range context, so high-resolution details, especially edges, cannot be fully recovered. To address this problem, a multi-level aggregation network (MANet) is proposed. First, the proposed global dependency module extracts deep global features by learning the interrelationships of all positions in the context, and also filters redundant channel information. Second, MANet extends the Multi-level Feature Aggregation Network by adding a simple and effective two-path feature refining module before each up-sampling module to optimize the segmentation results. The two-path feature refining module uses two independent branches to obtain features at different depths, which enriches the hierarchical structure of the network. Combined with the subsequent up-sampling module, it effectively enhances MANet's ability to recover detailed information in HRRSIs. Experimental results show that the proposed methods achieve competitive performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
371. Energy-based learning for open-set classification in remote sensing imagery.
- Author
-
Al Rahhal, Mohamad M., Bazi, Yakoub, Al-Dayil, Reham, Alwadei, Bashair M., Ammour, Nassim, and Alajlan, Naif
- Subjects
- *
REMOTE sensing , *CLASSIFICATION , *DATA distribution , *LOGITS - Abstract
Scene classification in remote sensing imagery is generally addressed from a closed-set perspective, in which the training and testing domains share the same land-cover classes. In practice, test images may belong to new land-cover classes unseen during the training phase, yet the classifier will wrongly assign them to one of the known training classes. This calls for the development of open-set methods with the ability to detect unknown images. In this paper, we propose an end-to-end learning approach based on vision transformers. We use energy-based learning to jointly model the class labels and the data distribution, reinterpreting the logits of the transformer's token classification head to learn the density of the training data. This trick allows the network to act as a generative model while retaining its discriminative power. In the test phase, we identify images with low log-likelihood scores as unknown and exclude them from classification. Experiments on three remote sensing scene datasets confirm the promising capability of the proposed open-set classification model. [ABSTRACT FROM AUTHOR]
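The energy reinterpretation of logits can be rendered in a few lines. This is a NumPy sketch of the scoring step only (an assumption for illustration; the paper computes the logits with a vision transformer, and the threshold is tuned on validation data): the free energy of a logit vector is E(x) = -log Σ_k exp(logit_k), in-distribution inputs tend to receive lower energy, and a threshold flags unknowns.

```python
import numpy as np

def energy_score(logits):
    """Free energy per sample, computed with a stabilised log-sum-exp."""
    m = logits.max(axis=1, keepdims=True)
    return -(m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1)))

logits = np.array([[9.0, 0.5, 0.2],    # confident known-class sample
                   [0.4, 0.3, 0.5]])   # diffuse logits: likely unknown class
scores = energy_score(logits)
threshold = -2.0                        # tuned on validation data in practice
is_unknown = scores > threshold         # high energy => low density => unknown
```

The confident sample gets a strongly negative energy (≈ -9) and is kept; the diffuse one sits above the threshold and would be discarded from classification.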
- Published
- 2022
- Full Text
- View/download PDF
372. Gradient Guided Pyramidal Convolution Residual Network with Interactive Connections for Pan-sharpening.
- Author
-
Lai, Zhibing, Chen, Lihui, Liu, Zitao, and Yang, Xiaomin
- Subjects
- *
CONVOLUTIONAL neural networks , *REMOTE sensing , *IMAGE processing , *COMMUNITIES - Abstract
Convolutional neural networks (CNNs) have played a predominant role in the field of remote sensing over the last few years. As a significant branch of remote sensing image processing, pan-sharpening produces a high-resolution multi-spectral (HRMS) image from a low-resolution multi-spectral (LRMS) image and a high-resolution (HR) panchromatic (PAN) image. Benefiting from their inherently powerful representation ability, deep-learning-based methods have achieved promising performance in the pan-sharpening community. However, these methods do not take advantage of the gradient characteristic, which contains abundant structure information to guide the pan-sharpening process, and thus fail to achieve the desired spatial preservation. In this paper, we propose a gradient-guided pyramidal convolution residual network with interactive connections (GGPCRN) to address this issue. Specifically, besides the indispensable reconstruction branch, an auxiliary gradient branch providing additional structure information is built to guide the recovery process. Moreover, we introduce pyramidal convolution, containing a series of filters of varying depth and size, into our network to capture details at different scales for better performance. To further strengthen the guidance of the gradient maps, two measures are taken. On the one hand, interactive connections are proposed to transfer the mutual effect between the reconstruction branch and the gradient branch. On the other hand, we incorporate a mild gradient loss to impose a second-order restraint on the pan-sharpened images, making the network concentrate more on structure preservation. Both reduced-resolution and full-resolution experiments suggest that our GGPCRN performs favourably against other methods in terms of quantitative evaluations and visual improvements. [ABSTRACT FROM AUTHOR]
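The gradient-loss idea can be sketched independently of the network. Below is a first-order L1 gradient term in NumPy (an assumption for illustration; the paper's "mild gradient loss" imposes a second-order restraint, which would penalise gradients of gradients in exactly the same way, and the toy images are not the paper's data):

```python
import numpy as np

def grad_l1_loss(pred, target):
    """L1 distance between the image gradients of prediction and reference."""
    gy_p, gx_p = np.gradient(pred)
    gy_t, gx_t = np.gradient(target)
    return np.abs(gy_p - gy_t).mean() + np.abs(gx_p - gx_t).mean()

x = np.tile(np.arange(32) / 4.0, (32, 1))
target = np.sin(x)            # reference with sharp horizontal structure
weak = 0.5 * np.sin(x)        # same structure but weakened edges
loss_same = grad_l1_loss(target, target)
loss_weak = grad_l1_loss(weak, target)
```

A prediction that matches intensities but flattens edges is penalised by this term even when a plain pixel loss would be small, which is why it pushes the network toward structure preservation.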
- Published
- 2022
- Full Text
- View/download PDF
373. A spatial-spectral feature based target detection framework for high‑resolution HSI.
- Author
-
Li, Yanshan, Chen, Shifu, Xu, Jianjie, Tang, Haojin, and Liu, Wenke
- Subjects
- *
SPECTRAL imaging , *REMOTE sensing , *MULTISPECTRAL imaging , *SPATIAL resolution - Abstract
Due to the development of hyperspectral image (HSI) technology, high-resolution hyperspectral imagery (HRHSI) is becoming widely used in remote sensing. Compared to traditional HSI, HRHSI has extremely high resolution in both the spatial and spectral domains. It contains more texture and spectral information than low-resolution HSI (LRHSI), which can improve HSI target detection performance. However, the majority of existing automatic target detection methods are only applicable to LRHSI. Therefore, this paper puts forward a spatial-spectral feature-based target detection framework for HRHSI. First, a two-channel residual network is proposed to jointly learn spatial-spectral features from the spectral and spatial domains of HRHSI. Second, a spatial-spectral feature space is constructed to describe the distribution of the spatial-spectral features of HRHSI, which can overcome the limitation of the number of training samples. A combined loss function is used to minimize within-class differences and maximize between-class distances in the spatial-spectral feature space. Finally, the detection map is obtained in the spatial-spectral feature space by calculating the Mahalanobis distance and analysing the credibility of the target. The experimental results show that our algorithm achieves better target detection accuracy when the number of training samples is limited. [ABSTRACT FROM AUTHOR]
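The final Mahalanobis-distance step is generic enough to sketch on its own. In the toy below, random vectors stand in for the network's learned spatial-spectral features (an assumption; the dimensions, sample counts and threshold are illustrative): each pixel feature is scored by its distance to the target-class distribution, and low-distance pixels populate the detection map.

```python
import numpy as np

rng = np.random.default_rng(1)
target_feats = rng.normal(loc=2.0, size=(200, 8))   # target training features
mu = target_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(target_feats, rowvar=False))

def mahalanobis(feats, mu, cov_inv):
    """Row-wise Mahalanobis distance to the target distribution."""
    d = feats - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

pixels = np.vstack([rng.normal(2.0, size=(5, 8)),    # target-like pixels
                    rng.normal(-2.0, size=(5, 8))])  # background pixels
dist = mahalanobis(pixels, mu, cov_inv)
detection_map = dist < 4.0                           # illustrative threshold
```

Because the distance is normalised by the learned covariance, correlated feature dimensions do not inflate the score, which is the reason this metric is preferred over plain Euclidean distance in the feature space.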
- Published
- 2022
- Full Text
- View/download PDF
374. Finding a suitable sensing time period for crop identification using heuristic techniques with multi-temporal satellite images.
- Author
-
Fernández-Sellers, Marcos, Siesto, Guillermo, Lozano-Tello, Adolfo, and Clemente, Pedro J.
- Subjects
- *
REMOTE-sensing images , *ARTIFICIAL neural networks , *AGRICULTURAL productivity , *AGRICULTURAL processing , *AGRICULTURAL policy - Abstract
Satellite crop identification processes are increasingly being used on a large scale, both to verify the crop and to improve production. As it is necessary to study phenological data over a period of time across a large territory, a lot of storage space is needed to save the satellite images and a lot of calculation time to analyse all this information. Sensing periods are usually established based on subjective expert criteria or previous experience. However, this choice may introduce inconsistencies when discriminating crop patterns and does not guarantee good precision. These processes would improve greatly if appropriate time periods could be found systematically, using the minimum number of satellite images in the shortest possible time. In this paper, we propose a new methodology to determine a suitable sensing period for crop identification using Sentinel-2 images, applying hill-climbing algorithms to the training sets of neural network models. We used the method successfully in the 2020 Common Agricultural Policy campaign in the Extremadura region of Spain. The article also describes the use of the method in a tobacco-detection case study in this region. [ABSTRACT FROM AUTHOR]
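A hill-climbing search over sensing windows can be sketched as follows. The `window_score` function below is a toy stand-in for the paper's real objective, which is training a neural network on the images inside the window and reading off its identification accuracy (everything here, including `N_DATES`, is an illustrative assumption):

```python
import random

N_DATES = 36                        # e.g. one Sentinel-2 composite per ~10 days

def window_score(start, length):
    # Toy objective: prefer short windows centred on the growing season.
    centre = start + length / 2
    return -abs(centre - 18) - 0.1 * length

def hill_climb(steps=200, seed=0):
    rng = random.Random(seed)
    start, length = rng.randrange(N_DATES - 4), 4   # random initial window
    best = window_score(start, length)
    for _ in range(steps):
        # Neighbourhood: shift the window or grow/shrink it by one date.
        ds, dl = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        s, l = start + ds, length + dl
        if s >= 0 and l >= 2 and s + l <= N_DATES:
            score = window_score(s, l)
            if score > best:        # greedy move to a better neighbour
                start, length, best = s, l, score
    return start, length, best

start, length, best = hill_climb()
```

Each evaluation of a real objective retrains a model, so the appeal of hill climbing here is that it probes only a handful of candidate windows rather than the full combinatorial space of date subsets.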
- Published
- 2022
- Full Text
- View/download PDF
375. New SAR target recognition based on YOLO and very deep multi-canonical correlation analysis.
- Author
-
Amrani, Moussa, Bey, Abdelatif, and Amamra, Abdenour
- Subjects
- *
STATISTICAL correlation , *SYNTHETIC aperture radar , *SPECKLE interference , *CONVOLUTIONAL neural networks , *CANONICAL correlation (Statistics) , *SUPPORT vector machines , *FEATURE extraction - Abstract
Synthetic Aperture Radar (SAR) images are prone to noise contamination, which makes target recognition in SAR images very difficult. Inspired by the great success of very deep convolutional neural networks (CNNs), this paper proposes a robust feature extraction method for SAR image target classification that adaptively fuses effective features from different CNN layers. First, a YOLOv4 network is fine-tuned to detect the targets in the respective MF SAR target images. Second, a very deep CNN is trained from scratch on the moving and stationary target acquisition and recognition (MSTAR) database, using small filters throughout the whole net to reduce speckle noise. Using small convolution filters also decreases the number of parameters in each layer and therefore reduces computation cost as the CNN goes deeper. The resulting CNN model can extract very deep features from the target images without any noise filtering or pre-processing. Third, our approach uses multi-canonical correlation analysis (MCCA) to adaptively learn CNN features from different layers such that the resulting representations are highly linearly correlated, and can therefore achieve better classification accuracy even with a simple linear support vector machine. Experimental results on the MSTAR dataset demonstrate that the proposed method outperforms state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
376. SCA-CDNet: a robust siamese correlation-and-attention-based change detection network for bitemporal VHR images.
- Author
-
Pang, Shiyan, Zhang, Anran, Hao, Jingjing, Liu, Fengzhu, and Chen, Jia
- Subjects
- *
DATA augmentation , *LAND cover , *NATURAL disasters , *END-to-end delay , *4G networks - Abstract
Change detection is a key step in various geographic information applications such as land-cover change monitoring, agricultural assessment, natural disaster evaluation, and illegal building investigation. In practice, discovering or outlining these changes is labour-intensive and time-consuming. To address this problem, a novel end-to-end Siamese correlation-and-attention-based change detection network (SCA-CDNet) is proposed in this paper for bitemporal very-high-resolution images. In this method, five strategies are adopted to improve the final change detection results. First, data augmentation is used to reduce overfitting and improve the generalization ability of the trained model. Second, in encoding, classic networks (e.g. ResNet) are introduced to extract multiscale image features and make full use of existing pretrained weights to reduce the difficulty of subsequent model training. Third, a new correlation module is designed to stack the corresponding bitemporal features and extract change features of smaller dimensions. Fourth, an attention model is introduced between the correlation module and the decoder module so that the network pays more attention to areas or channels with a greater effect on change analysis. Fifth, a new weighted cross-entropy loss function is designed, which makes training focus on error detection and improves the final accuracy of the trained model. Finally, extensive experimental results on three public data sets, including an evaluation of data augmentation, an ablation study, and a comparison with the state of the art, demonstrate the effectiveness and superiority of our proposed method, achieving an intersection over union (IoU) of 84.15%, 83.50%, and 77.29% on the three data sets, respectively. [ABSTRACT FROM AUTHOR]
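Change pixels are typically rare, which is what the weighted cross-entropy addresses. A minimal sketch, assuming inverse-class-frequency weights (a common choice; the paper's exact weighting scheme may differ) and binary change/no-change labels:

```python
import numpy as np

def weighted_bce(prob, label, eps=1e-7):
    """Binary cross-entropy with inverse-class-frequency weights."""
    prob = np.clip(prob, eps, 1 - eps)
    pos_frac = max(label.mean(), eps)
    w_pos, w_neg = 1.0 / pos_frac, 1.0 / max(1.0 - pos_frac, eps)
    loss = -(w_pos * label * np.log(prob)
             + w_neg * (1.0 - label) * np.log(1.0 - prob))
    return loss.mean()

label = np.array([1.0, 0, 0, 0, 0, 0, 0, 0])          # one rare "change" pixel
good_pred = np.array([0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
bad_pred = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])  # misses the change
loss_good = weighted_bce(good_pred, label)
loss_bad = weighted_bce(bad_pred, label)
```

With unweighted cross-entropy, a model that predicts "no change" everywhere scores deceptively well on such imbalanced maps; the positive-class weight makes the single missed change pixel dominate the loss instead.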
- Published
- 2022
- Full Text
- View/download PDF
377. Ship detection of optical remote sensing image in multiple scenes.
- Author
-
Li, Xungen, Li, Zixuan, Lv, Shuaishuai, Cao, Jing, Pan, Mian, Ma, Qi, and Yu, Haibin
- Subjects
- *
OPTICAL remote sensing , *RECURRENT neural networks , *REMOTE-sensing images , *FEATURE extraction , *SHIPS - Abstract
In view of the characteristics of ships in optical remote-sensing images, such as multiple scales, a majority of small objects, crowded arrangements, and complex backgrounds, this paper presents a ship detection framework combining a network that fuses multi-level features across levels, a rotation region proposal network, and a bidirectional recurrent neural network with a self-attention mechanism. First, because remote-sensing ships have multiple scales and diverse characteristics, we set up a network that fuses multi-level features across levels to increase the precision of ship feature extraction, thus improving performance on multi-scale, small-object, and complex-background problems. Second, we separately design the ROI pooling layer and the bidirectional recurrent neural network with self-attention, which incorporates prior information on ship dimension and position to achieve good performance and precise ship positioning in crowded scenes. Finally, we verify the effectiveness of the proposed method through experiments; the experimental data include a private dataset we built from Google Earth imagery, the ship subset of DOTA, and the public HRSC2016 ship dataset. The results verify the contributions of each proposed module, and the comparisons show that our method achieves state-of-the-art performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
378. Investigating the effect of the physical scattering mechanism of the dual-polarization sentinel-1 data on the temporal coherence optimization results.
- Author
-
Azadnejad, S., Maghsoudi, Y., and Perissin, D.
- Subjects
- *
SYNTHETIC aperture radar , *DEFORMATION of surfaces , *POLARIMETRY , *REMOTE sensing , *POLARIZATION interferometers - Abstract
Polarimetric Persistent Scatterer Interferometric Synthetic Aperture Radar (PS-InSAR) is an effective technique for increasing the number and phase quality of selected persistent scatterer (PS) pixels. In this technique, multitemporal polarimetric data are used to find the dominant scattering mechanism of targets in a stack of SAR data through polarimetric optimization and to improve the performance of PSI methods for deformation studies. The main goal of polarimetric optimization is to find the optimum scattering mechanism so as to generate interferograms of better quality. In this paper, we investigated the effect of the physical scattering mechanism on the temporal coherence optimization results; in this framework, only the physical scattering mechanism is optimized. The optimization maximizes the temporal coherence criterion by changing the type of scattering mechanism to increase the number of PS pixels with good phase quality. The proposed method is tested on a dataset of 17 dual-pol SAR acquisitions (VV/VH) from the Sentinel-1A satellite. This paper concludes that the phase quality of PS pixels can be improved by optimizing the physical scattering mechanism. The results also show an overall increase in PS pixel density in different areas with respect to the conventional VV channel. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
379. A random forest-based framework for crop mapping using temporal, spectral, textural and polarimetric observations.
- Author
-
Khosravi, Iman and Alavipanah, Seyed Kazem
- Subjects
- *
SYNTHETIC aperture radar , *RANDOM forest algorithms , *POLARIMETRY , *REMOTE sensing , *IMAGING systems - Abstract
Combining optical and polarimetric synthetic aperture radar (PolSAR) earth observations offers a complementary data set with a significant number of spectral, textural, and polarimetric features for crop mapping and monitoring. Moreover, a temporal combination of both sources of information may lead to more reliable results than single-time observations. In this paper, an operational framework based on the stacked generalization of random forests (RF), which efficiently employs bi-temporal observations of optical and radar data, is proposed for crop mapping. In the first step, various spectral, vegetation index, textural, and polarimetric features were extracted from both data sources and placed into several groups. Each group was classified separately using a single RF classifier. Then, several additional classification tasks were accomplished by another RF classifier. The earth observations used in this paper were collected by RapidEye satellites and the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) system over an agricultural region near Winnipeg, Manitoba, Canada. The results confirmed that the proposed methodology provided higher overall accuracy and kappa coefficient than the traditional stacking method and than all the individual RFs using each group. These accuracy metrics were also better than those of RFs using the stacked features. Moreover, only the proposed methodology achieved standard accuracy (F-score ≥85%) for all crop types in the study area. Visual comparison also demonstrated that the crop maps produced by the proposed methodology had more homogeneous, uniform appearances, and the mixed pixels of crop types, which were abundant in the maps of the traditional stacking method and the individual RFs, were largely eliminated. [ABSTRACT FROM AUTHOR]
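The stacked-generalisation structure can be sketched compactly. In the toy below, three synthetic feature groups stand in for the paper's spectral, textural and polarimetric groups (an assumption; the data, group counts and forest sizes are illustrative): one RF per group produces class probabilities, and a second-level RF learns from the stacked probabilities.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 3, size=n)                        # three crop classes
groups = [rng.normal(size=(n, 5)) + y[:, None] * s    # groups of varying quality
          for s in (0.8, 0.5, 0.3)]

base_probs = []
for X in groups:
    # One base RF per feature group, trained on the first 200 samples.
    rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:200], y[:200])
    base_probs.append(rf.predict_proba(X))            # (in practice, use
stacked = np.hstack(base_probs)                       # out-of-fold probabilities)

# Second-level RF trained on the stacked base-level probabilities.
meta = RandomForestClassifier(n_estimators=50, random_state=0).fit(stacked[:200], y[:200])
accuracy = (meta.predict(stacked[200:]) == y[200:]).mean()
```

The meta-learner can discover that some groups are more reliable than others and weight their votes accordingly, which is the advantage the paper reports over simply concatenating all features into one RF.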
- Published
- 2019
- Full Text
- View/download PDF
380. Automatic on-orbit geometric calibration framework for geostationary optical satellite imagery using open access data.
- Author
-
Dong, Yang, Fan, Dazhao, Ma, Qiuhe, Ji, Song, and Ouyang, Huan
- Subjects
- *
CALIBRATION , *GEOSTATIONARY satellites , *REMOTE-sensing images , *OPEN access publishing - Abstract
Due to a variety of factors, long-term on-orbit geometric calibration must be performed on geostationary optical satellites to meet subsequent high-precision geometric processing requirements. Designing a fully automatic on-orbit geometric inspection and calibration process therefore has great application value. In this paper, we use open-access geographic information data to achieve a more robust automatic on-orbit geometric calibration suited to the imaging characteristics of geostationary optical satellites. Experiments were conducted with the high-resolution geostationary optical satellite GaoFen-4. The results show that the process designed in this paper enables automatic on-orbit geometric calibration of geostationary optical satellites and obtains high-accuracy calibration parameters, effectively improving the geometric positioning accuracy of satellite imagery. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
381. Energy-based cloud detection in multispectral images based on the SVM technique.
- Author
-
Sui, Yanlin, He, Bin, and Fu, Tianjiao
- Subjects
- *
PIXELS , *DIGITAL images , *CLOUD computing , *IMAGE processing , *EXTRACTION techniques , *SUPPORT vector machines - Abstract
In this paper, the energy characteristics of Gabor textures are used for cloud detection in high-resolution multispectral images. First, the satellite remote-sensing image is divided into superpixels using simple linear iterative clustering (SLIC); then, the Gabor texture energy and spectral characteristics are computed by extracting the texture features of the superpixels. The features of the cloud superpixels are used as learning samples for a support vector machine (SVM) classifier, and a classification model is obtained by training it. Finally, a cloud-detection experiment is conducted on images from various sensors with three visible bands and one near-infrared band. The experimental results show that the proposed method provides excellent average overall accuracy for thick and thin clouds against complex backgrounds of forests, harbours, snow and mountains. The feature parameters used in this paper are not tied to specific image parameters; thus, the method provides good results and generalizes across sensor types. [ABSTRACT FROM AUTHOR]
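The Gabor texture-energy feature can be sketched in isolation. The sketch below makes several simplifying assumptions: a single orientation and scale (the method would use a filter bank), whole rectangular patches instead of SLIC superpixels, and a direct comparison in place of the trained SVM.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq=0.25, theta=0.0, sigma=3.0, size=15):
    """Real (cosine) Gabor kernel: Gaussian envelope times oriented carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def gabor_energy(patch, kernel):
    # Subtract the mean so brightness does not masquerade as texture,
    # and use 'valid' mode to avoid border artifacts.
    resp = fftconvolve(patch - patch.mean(), kernel, mode="valid")
    return float((resp**2).mean())

rng = np.random.default_rng(0)
k = gabor_kernel()
smooth_cloud = 0.9 + 0.01 * rng.normal(size=(32, 32))    # bright, low texture
textured_ground = 0.4 + 0.3 * rng.normal(size=(32, 32))  # darker, high texture
e_cloud = gabor_energy(smooth_cloud, k)
e_ground = gabor_energy(textured_ground, k)
```

Clouds are bright but texturally smooth, so their Gabor energy is low; combining this energy with per-superpixel spectral statistics gives the SVM a feature vector that separates clouds from bright but textured backgrounds such as snow.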
- Published
- 2019
- Full Text
- View/download PDF
382. Comparing the accuracy of estimated terrain elevations across spatial resolution.
- Author
-
Ghandehari, Mehran, Buttenfield, Barbara P., and Farmer, Carson J. Q.
- Subjects
- *
GEOGRAPHIC information systems , *DIGITAL elevation models , *INTERPOLATION , *SPATIAL ability , *TERRAIN mapping - Abstract
Terrain is modelled in Geographic Information Science on a grid, assuming that elevation values are constant within any single pixel of a Digital Elevation Model (DEM). Pixels are considered flat and rigid for computational simplicity (a 'rigid pixel' paradigm). This paradigm does not account for the slope and curvature of terrain within each pixel, generating imprecise measurements, particularly as pixel size increases or in uneven terrain. This paper relaxes the rigid-pixel assumption, allowing for possible sub-pixel variations in slope and curvature (a 'surface-adjusted' paradigm), and compares different interpolation methods for estimating the elevation of arbitrary points given a regular grid. Tests interpolating elevation values for 20,000 georeferenced off-centroid random points from a regular-grid DEM are presented, using a variety of exact and inexact local deterministic interpolation methods within contiguity configurations incorporating first- and second-order neighbours. The paper examines the accuracy of surface-adjusted estimation across a progression of spatial resolutions (10 m, 30 m, 100 m, and 1,000 m DEMs) and a suite of terrain types varying in latitude, altitude, slope, and roughness, validating off-centre estimates against direct elevation measurements on a 3 m resolution lidar DEM. Results illustrate that the bi-quadratic and bi-cubic interpolation methods outperform weighted average, linear, and bi-linear methods at coarse resolutions and in rough or non-uniform terrain. In smooth or flat terrain and at the finest resolutions, the interpolation method affects estimation accuracy little or not at all. The type of contiguity configuration also appears to play a role in estimation errors, with tighter neighbourhoods exhibiting higher accuracy. The analysis also examined regularized mathematical surfaces, adding autocorrelated randomly distributed noise to simulate terrain; the results of experiments based on regularized smooth mathematical surfaces do not translate directly to terrain modelling. Finally, the analysis considers the balance between the increased computation time needed for surface-adjusted elevation measurements and the improvement in accuracy. [ABSTRACT FROM AUTHOR]
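The rigid-pixel vs surface-adjusted contrast can be demonstrated on a toy surface. The sketch below assumes a smooth analytic function standing in for a DEM and compares only nearest-neighbour (elevation constant within the pixel, i.e. rigid-pixel) against bilinear estimates; the paper additionally tests bi-quadratic, bi-cubic and weighted-average methods.

```python
import numpy as np

def surface(x, y):
    return np.sin(x) * np.cos(y)              # toy terrain with curvature

def sample_errors(cell):
    g = np.arange(0.0, 2 * np.pi, cell)       # pixel-centre elevations
    Z = surface(*np.meshgrid(g, g, indexing="ij"))
    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, g[-1], size=(500, 2))   # off-centroid query points
    i = np.clip((pts[:, 0] / cell).astype(int), 0, len(g) - 2)
    j = np.clip((pts[:, 1] / cell).astype(int), 0, len(g) - 2)
    fx, fy = pts[:, 0] / cell - i, pts[:, 1] / cell - j
    # Rigid pixel: take the value of the nearest pixel centre.
    ni = np.round(pts[:, 0] / cell).astype(int).clip(0, len(g) - 1)
    nj = np.round(pts[:, 1] / cell).astype(int).clip(0, len(g) - 1)
    nearest = Z[ni, nj]
    # Surface-adjusted (first order): bilinear blend of the 4 neighbours.
    bilinear = (Z[i, j] * (1 - fx) * (1 - fy) + Z[i + 1, j] * fx * (1 - fy)
                + Z[i, j + 1] * (1 - fx) * fy + Z[i + 1, j + 1] * fx * fy)
    truth = surface(pts[:, 0], pts[:, 1])
    return np.abs(nearest - truth).mean(), np.abs(bilinear - truth).mean()

err_nearest, err_bilinear = sample_errors(cell=0.5)
```

On curved terrain the rigid-pixel error grows linearly with cell size while the bilinear error grows quadratically, which mirrors the paper's finding that the choice of interpolator matters most at coarse resolutions and in rough terrain.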
- Published
- 2019
- Full Text
- View/download PDF
383. Squinted SAR focusing for improving automatic radar sounder data analysis and enhancement.
- Author
-
Ferro, A.
- Subjects
- *
SYNTHETIC aperture radar , *IMAGING systems in meteorology , *PROBLEM solving , *INFORMATION theory , *IMAGE processing - Abstract
Radar sounder (RS) instruments are providing a huge amount of subsurface data. To support the study of these data, several automatic methods have recently been proposed. So far, the development of such methods has mostly focused on publicly available radargrams (standard products), which are generated from raw data in order to obtain high-visual-quality images. The possibility of exploiting raw processing to derive additional information for automatic analyses has not yet been considered. To fill this gap, in this paper we show that, by properly tuning raw signal processing, it is possible to automatically obtain additional a priori information on subsurface targets. Such information can be used to potentially improve the results of further automatic analyses and/or to address problems that cannot be easily solved automatically using only standard products. In particular, we propose four measurements obtained using squinted synthetic aperture radar focusing that provide useful physical information about subsurface features. Moreover, to prove the effectiveness of the proposed approach, a novel preprocessing method for automatic layer detection techniques based on the concepts developed in this paper is presented and validated. All the examples reported in the paper use real planetary RS data acquired by the SHAllow RADar instrument on Mars. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
384. Annual land-cover mapping based on multi-temporal cloud-contaminated landsat images.
- Author
-
Xie, Shuai, Liu, Liangyun, Zhang, Xiao, and Chen, Xidong
- Subjects
- *
LAND cover , *LANDSAT satellites , *CLOUDS , *SUPPORT vector machines , *PIXELS - Abstract
Landsat images, which have fine spatial resolution, are an important data source for land-cover mapping. Multi-temporal Landsat classification has become popular because of the abundance of freely available Landsat images. However, cloud cover is inevitable due to the relatively low temporal frequency of the data. In this paper, a novel approach for multi-temporal Landsat land-cover classification is proposed. The land cover for each Landsat acquisition date was first classified using a Support Vector Machine (SVM), and the classification results were then combined using different strategies, with missing observations allowed. Three strategies, majority vote (MultiSVM-MV), Expectation Maximisation (MultiSVM-EM) and joint SVM probability (JSVM), were used to merge the multi-temporal classification maps. The three algorithms were applied to a region of the path/row 143/31 scene using 2010 Landsat-5 Thematic Mapper (TM) images. The results demonstrated that, for all three algorithms, the average overall accuracy (OA) improved with increasing temporal depth; also, for a given temporal depth, JSVM clearly outperformed MultiSVM-MV and MultiSVM-EM, and MultiSVM-EM performed slightly better than MultiSVM-MV. The OA values for the three classification results using all epochs were 70.28%, 72.40% and 74.80% for MultiSVM-MV, MultiSVM-EM and JSVM, respectively. In comparison, two other annual composite image-based classification methods, annual maximum Normalised Difference Vegetation Index (NDVI) composite classification and annual best-available-pixel (BAP) composite classification, gave OA values of 68.08% and 69.92%, respectively, indicating that our method performs better. Therefore, the novel multi-temporal Landsat classification method presented in this paper can deal with the cloud-contamination problem and produce accurate annual land-cover maps from multi-temporal cloud-contaminated images, which is important for regional and global land-cover mapping. [ABSTRACT FROM AUTHOR]
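The simplest of the three merging strategies, MultiSVM-MV, can be sketched directly. A minimal NumPy rendering, assuming the per-date SVM label maps are already given and using -1 to mark cloud-contaminated (missing) observations so that absent dates simply contribute no vote:

```python
import numpy as np

def majority_vote(label_stack, n_classes, missing=-1):
    """label_stack: (n_dates, n_pixels) integer class labels per date."""
    votes = np.zeros((label_stack.shape[1], n_classes), dtype=int)
    for date_labels in label_stack:
        valid = np.nonzero(date_labels != missing)[0]
        np.add.at(votes, (valid, date_labels[valid]), 1)  # one vote per clear date
    return votes.argmax(axis=1)

stack = np.array([[0,  1, -1, 2],    # date 1 (third pixel cloudy)
                  [0,  1,  2, 2],    # date 2
                  [1, -1,  2, 2]])   # date 3 (second pixel cloudy)
merged = majority_vote(stack, n_classes=3)   # -> [0, 1, 2, 2]
```

Each pixel is labelled from whatever clear observations it has, which is how the approach tolerates per-date cloud contamination without discarding whole scenes.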
- Published
- 2019
- Full Text
- View/download PDF
385. Anthropogenic aerosol emissions mapping and characterization by imaging spectroscopy - application to a metallurgical industry and a petrochemical complex.
- Author
-
Philippets, Yannick, Foucher, Pierre-Yves, Marion, Rodolphe, and Briottet, Xavier
- Subjects
- *
AEROSOLS , *METALLURGICAL plants , *EMISSIONS (Air pollution) , *SPECTRAL imaging , *CARBON , *MICROPHYSICS - Abstract
This paper focuses on the retrieval of industrial aerosol optical thickness (AOT) and microphysical properties by means of airborne imaging spectroscopy. Industrial emissions generally lead to optically thin plumes, requiring a detection method adapted to the weak proportion of the sought particles in the atmosphere. To this end, a semi-analytical model combined with the Cluster-Tuned Matched Filter (CTMF) algorithm is presented to characterize these plumes, requiring knowledge of the ground under the plume. The model allows the direct computation of the at-sensor radiance when a plume is included in the radiative transfer. When applied to the industrial aerosol classes defined in this paper, simulated spectral radiances can be compared to 'real' MODTRAN (Moderate Resolution Atmospheric Transmission) radiances using the Spectral Angle Mapper (SAM). Over the range from 0.4 to 0.7 µm, for three grounds (water, vegetation, and a bright one), SAM scores are lower than 0.043 in the worst case (a particle that is both absorbing and scattering over a bright ground), and usually lower than 0.025. The darker the ground reflectance, the more accurate the results (typically for reflectance lower than 0.3). Concerning AOT retrieval capabilities, with a pre-calculated model for a reference optical thickness of 0.25, we are able to retrieve plume AOT at 550 nm in the range 0.0 to 0.4 with an error usually between 9% and 13%. The first test case is a CASI (Compact Airborne Spectrographic Imager) image acquired over the metallurgical industry of Fos-sur-Mer (France). First results of the model coupled with the CTMF algorithm reveal a scattering aerosol plume with particle sizes increasing with distance from the stack (from a detection score of 54% near the stack for particles with a diameter of 0.1 µm, to 69% away from it for 1.0 µm particles). A refinement is then made to estimate the aerosol plume properties more precisely, using a multimodal distribution based on the previous results; it identifies a mixture of sulfate and brown carbon particles with a plume AOT ranging between 0.2 and 0.5. The second test case is an AHS (Airborne Hyperspectral Scanner) image acquired over the petrochemical site of Antwerp (Belgium). The first CTMF application detects a brown carbon aerosol with a 0.1 µm mode (detection score 51%). Refined results show the AOT decreasing from 0.15 to 0.05 along the plume for a mixture of a brown carbon fine mode and sulfate aerosol with a 0.3 µm radius. [ABSTRACT FROM AUTHOR]
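The matched filter at the core of CTMF can be sketched for a single cluster (CTMF fits one such filter per background cluster; the random background spectra and the stand-in plume signature below are illustrative assumptions): each pixel spectrum is projected onto the covariance-whitened target signature.

```python
import numpy as np

rng = np.random.default_rng(0)
bands = 6
background = rng.normal(size=(500, bands))          # background cluster spectra
target_sig = rng.normal(size=bands)                 # plume signature (stand-in)

mu = background.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(background, rowvar=False))

# Matched filter normalised so a pixel containing exactly one unit of the
# target signature scores 1.
w = cov_inv @ target_sig
w = w / (target_sig @ cov_inv @ target_sig)

plume_pixel = mu + 0.5 * target_sig                 # half a unit of plume
clean_pixel = mu                                    # pure background
score_plume = w @ (plume_pixel - mu)
score_clean = w @ (clean_pixel - mu)
```

The whitening by the inverse background covariance suppresses directions where the background itself varies strongly, which is what lets the filter pull an optically thin plume out of clutter.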
- Published
- 2019
- Full Text
- View/download PDF
386. Dynamic estimating of wetland vegetation cover based on linear spectral mixture and time phase transformation models.
- Author
-
Li, Zhe, Gong, Zhaoning, Guan, Hui, and Zhang, Qiang
- Subjects
- *
REMOTE sensing , *DROUGHTS , *LANDSAT satellites , *GROUND vegetation cover , *PHASE transitions - Abstract
Globally, remote sensing is being used to monitor vegetation degradation in areas of concern. In recent years, drought and water shortages have caused significant degradation of the wetland vegetation in Zhalong Wetland of Heilongjiang province, China. This paper employed middle- and high-resolution Landsat images to construct a Linear Spectral Mixture Analysis of the wetland, with the end-member extraction verified by a feasibility analysis and with vegetation cover data extracted over nearly 30 years. To address the poor temporal coverage of middle- and high-resolution images, this paper proposes a time-phase transformation method that combines the temporal advantage of moderate-resolution spectroradiometer images with the spatial advantage of high-resolution Landsat imagery. Based on an intensity analysis model, the temporal and spatial characteristics of vegetation cover in the study area were analyzed by time scale and by level of vegetation cover. The results show that (1) from 1985 to 2015, the vegetation cover showed an overall tendency to degrade, and (2) vegetation cover was extracted based on the time-phase transformation and linear spectral mixture models with an accuracy of 0.8628, which is higher than that of traditional remote sensing methods. Improving the prediction accuracy of vegetation transfer is of great theoretical value in relation to global climate change. [ABSTRACT FROM AUTHOR]
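Linear spectral mixture analysis models each pixel as a weighted sum of end-member spectra whose abundances sum to one. A toy numpy sketch of the unmixing step (the soft sum-to-one weight and the two synthetic end-members are illustrative, not from the paper):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate fractional abundances by least squares, with the sum-to-one
    constraint enforced softly by augmenting the linear system.

    pixel: (bands,) spectrum; endmembers: (bands, n_endmembers) matrix.
    Note: abundances are not forced to be non-negative in this toy version.
    """
    bands, n = endmembers.shape
    w = 100.0  # weight on the sum-to-one row (illustrative choice)
    A = np.vstack([endmembers, w * np.ones((1, n))])
    b = np.concatenate([pixel, [w]])
    abundances, *_ = np.linalg.lstsq(A, b, rcond=None)
    return abundances

# Two synthetic end-members; the pixel is an exact 30/70 mixture.
E = np.array([[0.1, 0.8], [0.2, 0.6], [0.9, 0.1]])
mixed = 0.3 * E[:, 0] + 0.7 * E[:, 1]
print(unmix(mixed, E))  # ≈ [0.3, 0.7]
```

A full implementation would add non-negativity constraints and per-pixel residual checks, but the core per-pixel computation is this small linear solve.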
- Published
- 2018
- Full Text
- View/download PDF
387. Characterizing seasonal and long-term dynamics of a lacustrine wetland in Xinjiang, China, using dense time-series remote sensing imagery.
- Author
-
Zhang, Jiudan, Li, Junli, Bao, Anming, Warner, Timothy A., Li, Longhui, Chang, Cun, Bai, Jie, and Liu, Tie
- Subjects
- *
REMOTE sensing , *WETLANDS , *WETLAND restoration , *WATER use , *WETLAND ecology - Abstract
Wetland degradation and ecological restoration has become a topic of key research interest, particularly in the context of climate change and excessive water resource utilization. The Aiximan wetland, a major lacustrine wetland in Xinjiang, China, has shrunk in area by approximately 80% over the last 30 years. An ecological water transfer project (EWTP) was implemented in 2017 in an attempt to reverse this shrinking. The project involves seasonal flooding through three canals during the growing season of the wetland. In this paper, we explore the history of the Aiximan wetland over the last 30 years using a dense time series of remote sensing images to analyse the spatio-temporal changes of the wetland. We focus on intermittent flood inundation and wetland vegetation growth before and after the EWTP initiation, as well as the role of agricultural expansion in the decline of the wetland. The results indicate the following: (1) The annual maximum of open water area, which had fallen to 25.91 km2 by 2016, increased to 47.54 km2 in 2019 due to the water transfers. However, although the area of wetland vegetation also increased following the water transfers, it did not regain the maximum extent it had in 2013. (2) The percentage of the total wetland area with vegetation cover steadily declined from 54.2% in 1990 to 13.4% in 2019, and 72.84% of the loss of wetland was due to conversion to farmland. (3) The loss of wetland vegetation over the last 20 years corresponds to a period when the wetland water area fell below a threshold of 30 km2, and the wetland vegetation accounted for less than 60% of the wetland area. (4) The Aiximan wetland has recently shown signs of recovery due to the EWTP. However, the remote sensing time series suggests that wetland vegetation restoration will require the wetland water area to exceed 30 km2. 
Adjusting the timing of intermittent flood inundation to encourage vegetation growth may also be important for sustainable restoration of the Aiximan wetland ecology. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
388. Hyperspectral image classification using multiple weighted local kernel matrix descriptors.
- Author
-
Asghari Beirami, Behnam and Mokhtarzade, Mehdi
- Subjects
- *
HYPERSPECTRAL imaging systems , *COVARIANCE matrices , *REMOTE sensing , *CLASSIFICATION , *GABOR filters , *MATRICES (Mathematics) - Abstract
The local covariance matrix descriptor is a recent spatial-spectral feature generation method that has been successfully applied to remote sensing image classification. However, it has been criticized for neglecting nonlinear relationships between features, a limitation that is particularly serious for hyperspectral images (HSIs). The present paper therefore develops weighted local kernel matrix (WLKM) descriptors for the spatial-spectral classification of HSIs. The developed WLKM features, covering spectral, textural, and geometrical aspects, are used in two proposed classification schemes. In the first approach, called 'early fusion', the weighted sum of WLKM descriptors derived from spectral and spatial features is classified using the log-Euclidean kernel SVM. In the second approach, called 'late fusion', a multiple log-Euclidean kernel SVM strategy based on the WLKM descriptors of spatial and spectral features is developed for HSI classification. Experiments on three widely used HSI datasets demonstrate the superiority of the proposed approaches over several recent spatial-spectral HSI classification techniques. [ABSTRACT FROM AUTHOR]
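The two ingredients the abstract builds on can be sketched compactly: a local covariance descriptor over a patch, and the log-Euclidean metric that makes covariance matrices comparable in a kernel SVM. A minimal numpy illustration (random patches stand in for real HSI data):

```python
import numpy as np

def local_covariance(patch):
    """Covariance descriptor of a spatial patch: pixels are observations,
    bands are variables. patch: (h, w, bands)."""
    X = patch.reshape(-1, patch.shape[-1])
    # Small diagonal load keeps the matrix symmetric positive-definite.
    return np.cov(X, rowvar=False) + 1e-6 * np.eye(patch.shape[-1])

def spd_log(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def log_euclidean_distance(C1, C2):
    """Log-Euclidean distance: Frobenius norm of the difference of the logs."""
    return float(np.linalg.norm(spd_log(C1) - spd_log(C2), "fro"))

rng = np.random.default_rng(0)
p1 = rng.random((5, 5, 3))  # 5x5 patch, 3 bands
print(log_euclidean_distance(local_covariance(p1), local_covariance(p1)))  # → 0.0
```

Plugging this distance into a Gaussian kernel yields the log-Euclidean kernel the abstract refers to; the weighting and kernelization of the descriptor itself are the paper's contribution and are not reproduced here.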
- Published
- 2022
- Full Text
- View/download PDF
389. SAR image edge detection: review and benchmark experiments.
- Author
-
Meester, M. J. and Baslamisli, A. S.
- Subjects
- *
SYNTHETIC aperture radar , *OPTICAL detectors , *FALSE positive error , *OPTICAL images , *SPECKLE interference , *SURVEILLANCE detection , *EDGE detection (Image processing) - Abstract
Edges are distinct geometric features crucial to higher-level object detection and recognition in remote-sensing processing, which is key to surveillance and gathering up-to-date geospatial intelligence. Synthetic aperture radar (SAR) is a powerful form of remote sensing. However, edge detectors designed for optical images tend to perform poorly on SAR images due to the presence of strong speckle noise, which causes false positives (type I errors). Therefore, many researchers have proposed edge detectors tailored specifically to SAR image characteristics. Although these edge detectors might achieve effective results in their own evaluations, the comparisons tend to include a very limited number of (simulated) SAR images. As a result, the generalized performance of the proposed methods is not truly reflected, as real-world patterns are much more complex and diverse. From this emerges another problem: a quantitative benchmark is missing in the field. Hence, it is not currently possible to fairly evaluate any edge detection method for SAR images. Thus, in this paper, we aim to close the aforementioned gaps by providing an extensive experimental evaluation of edge detection for SAR images. To that end, we propose the first benchmark on SAR image edge detection methods, established by evaluating various freely available methods, including methods considered to be the state of the art. [ABSTRACT FROM AUTHOR]
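A classic example of a SAR-specific detector of the kind such a benchmark would include is the ratio-of-averages edge detector: because speckle is multiplicative, a ratio of local means has a false-alarm rate independent of the local intensity, unlike a gradient. An illustrative vertical-only numpy toy (not the paper's benchmark code):

```python
import numpy as np

def ratio_edge_strength(img, half=3):
    """Ratio-of-averages edge strength along the vertical direction.

    For each pixel, compare the mean intensity of the windows above and
    below; the statistic 1 - min(r, 1/r) is insensitive to multiplicative
    speckle, so bright homogeneous areas do not trigger false edges.
    """
    h, w = img.shape
    strength = np.zeros_like(img, dtype=float)
    for i in range(half, h - half):
        upper = img[i - half:i, :].mean(axis=0)
        lower = img[i + 1:i + 1 + half, :].mean(axis=0)
        r = (upper + 1e-12) / (lower + 1e-12)
        strength[i] = 1.0 - np.minimum(r, 1.0 / r)  # 0 = no edge, →1 = strong edge
    return strength

# A two-level step image: the strongest response sits at the step row.
img = np.vstack([np.full((8, 16), 1.0), np.full((8, 16), 5.0)])
print(ratio_edge_strength(img)[8, 0] > ratio_edge_strength(img)[4, 0])  # → True
```

A real detector would sweep several window orientations and take the maximum response; the single vertical direction here keeps the sketch short.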
- Published
- 2022
- Full Text
- View/download PDF
390. EMS-CDNet: an efficient multi-scale-fusion change detection network for very high-resolution remote sensing images.
- Author
-
Zheng, Zhi, Wan, Yi, Zhang, Yongjun, Yang, Kun, Xiao, Rang, Lin, Chao, Wu, Qiong, and Peng, Daifeng
- Subjects
- *
REMOTE sensing , *IMAGE registration , *SURFACE of the earth , *DEEP learning , *FEATURE extraction - Abstract
Remote sensing image change detection (RSICD) is an essential measure for monitoring changes on the Earth's surface. In recent years, the explosive growth of very high-resolution (VHR) satellite sensors and booming innovations in deep learning technology have significantly boosted RSICD development. However, most current RSICD models focus on locating accurate change areas while ignoring efficiency, which limits their practical application, especially for large-scale and emergency RSICD tasks. In this paper, we propose an Efficient Multi-scale-fusion Change Detection Network (EMS-CDNet) for bi-temporal RSICD tasks. Our EMS-CDNet pays more attention to inference speed and the accuracy-efficiency trade-off rather than only pursuing detection accuracy. We designed a multi-scale fusion module for EMS-CDNet, which adopts multi-scale and multi-branch operations to extract multi-scale features simultaneously and aggregate features at different feature levels. In addition to enabling sufficient feature extraction, the multi-scale image input within the designed module alleviates the influence of image registration errors in practical applications, strengthening EMS-CDNet's value for practical RSICD tasks. We also integrated a novel partition unit in EMS-CDNet to lighten the model while maintaining the detection of small targets, thus shortening processing time without a severe accuracy decrease. We conducted experiments on two state-of-the-art (SOTA) public RSICD datasets and our own collected dataset. The public datasets were used to compare the overall accuracy and efficiency of EMS-CDNet, and our collected dataset was used to observe EMS-CDNet's performance under the influence of image registration errors. 
Our experimental results show that EMS-CDNet achieved a better accuracy-efficiency trade-off than the SOTA methods on the public datasets. For example, EMS-CDNet reduced the inference time by about 33% while maintaining detection accuracy identical to CLNet (the best of the comparison methods). Furthermore, EMS-CDNet achieved higher accuracy on our collected dataset, with an F1 of 74% and an mIoU of 0.806, demonstrating its robustness to image registration errors and its value for practical RSICD applications. [ABSTRACT FROM AUTHOR]
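The multi-scale fusion idea described above (process the input at several scales in parallel branches, then aggregate) can be sketched independently of any deep-learning framework. The toy below is an illustrative numpy stand-in, not EMS-CDNet's actual module:

```python
import numpy as np

def avg_pool(x, s):
    """Non-overlapping s*s average pooling of a 2D feature map."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def multi_scale_fusion(feat, scales=(1, 2, 4)):
    """Fuse a feature map across scales: pool each branch, upsample back by
    nearest-neighbour repetition, then average the branches. Coarser
    branches contribute context; the finest branch preserves detail."""
    h, w = feat.shape
    branches = []
    for s in scales:
        pooled = avg_pool(feat, s) if s > 1 else feat
        up = np.repeat(np.repeat(pooled, s, axis=0), s, axis=1)[:h, :w]
        branches.append(up)
    return np.mean(branches, axis=0)

feat = np.arange(16.0).reshape(4, 4)
print(multi_scale_fusion(feat).shape)  # → (4, 4)
```

In the network setting each branch would be a learned convolution rather than plain pooling, but the aggregation-across-resolutions structure is the same.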
- Published
- 2022
- Full Text
- View/download PDF
391. Segmentation of low scattering region in SAR images using multi-module fusion network.
- Author
-
Yang, Xiaqing, Zhou, Yuanyuan, Chen, Tingjun, Shi, Jun, and Cui, Guolong
- Subjects
- *
SYNTHETIC aperture radar , *NETWORK performance , *DATA mining , *MARKOV random fields , *CONVOLUTIONAL neural networks - Abstract
In this paper, a multi-module fusion network (MMFNet) is proposed for the segmentation of low scattering regions such as roads, waters, and shadows in synthetic aperture radar (SAR) images. It comprises three modules, i.e. a high-resolution backbone network module, a spatial pyramid pooling convolution (SPPC) module, and a channel attention module, and is trained with a weighted cross-entropy loss. The high-resolution backbone network retains high-resolution feature maps and reduces spatial accuracy loss, which contributes to the extraction of edge information. The SPPC module performs multi-scale feature fusion, extracts target areas of different sizes, and improves network accuracy. The channel attention module intensifies the network's expression of category information, further improving network performance. Our experimental analysis using real SAR data shows that MMFNet achieves good low scattering region segmentation, with mean IoU (MIoU) reaching up to 82.5%. [ABSTRACT FROM AUTHOR]
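The weighted cross-entropy loss mentioned above upweights under-represented classes (here, the low scattering regions) so they are not drowned out by the dominant background. A minimal numpy sketch with illustrative logits and class weights:

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    """Pixel-wise weighted cross-entropy.

    logits: (n, classes) raw scores; labels: (n,) integer classes;
    class_weights: (classes,) per-class weights. The weighted average
    lets rare classes contribute more to the gradient.
    """
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    picked = log_probs[np.arange(len(labels)), labels]
    weights = class_weights[labels]
    return float(-(weights * picked).sum() / weights.sum())

logits = np.array([[4.0, 0.0], [0.5, 0.2], [0.0, 3.0]])
labels = np.array([0, 0, 1])
# Doubling the weight of class 1 changes its influence on the loss.
print(weighted_cross_entropy(logits, labels, np.array([1.0, 2.0])))
```

With equal weights this reduces to ordinary mean cross-entropy; the weight vector is the only knob separating the two.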
- Published
- 2022
- Full Text
- View/download PDF
392. Xcep-Dense: a novel lightweight extreme inception model for hyperspectral image classification.
- Author
-
Ranjan, Pallavi and Girdhar, Ashish
- Subjects
- *
DEEP learning , *CONVOLUTIONAL neural networks , *COMMUNITIES - Abstract
Convolutional neural networks have demonstrated remarkable performance in capturing discriminative features in the hyperspectral domain. Deep learning and CNNs have carved out a distinct niche in the remote-sensing community, having the potential to classify high-dimensional images. However, the challenge of limited labelled samples and high-dimensional hyperspectral data aggravates overfitting in deep networks. Many existing works produce high accuracy but are computationally slow due to the massive computations involved. The Inception model, on the other hand, won the ImageNet Large Scale Visual Recognition Competition (ILSVRC) 2014 challenge and has been validated as a top-tier model by maintaining a trade-off between classification accuracy and computing time. In this paper, we propose a novel lightweight network, Xcep-Dense, a hybrid classification model that combines the core benefits of the extreme version of Inception and dense networks. The Xception network employs depth-wise separable convolutions and 3D slicing, which require fewer parameters, are computationally efficient, and provide excellent classification accuracy. The proposed network is configured with dense-network and optimization parameters to alleviate overfitting and gradient vanishing. Xcep-Dense's performance is validated on two benchmark hyperspectral datasets, Indian Pines and Salinas. [ABSTRACT FROM AUTHOR]
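The parameter saving from depth-wise separable convolutions, which underpins the "lightweight" claim above, follows from a simple count: a k×k standard convolution couples every input channel to every output channel, while the separable factorization splits this into a per-channel spatial filter plus a 1×1 pointwise mix. A quick check with illustrative layer sizes:

```python
def depthwise_separable_params(k, c_in, c_out):
    """Parameter counts for a standard k*k convolution versus its
    depthwise-separable factorization (k*k depthwise per input channel,
    followed by a 1*1 pointwise convolution). Biases are omitted."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out
    return standard, separable

std, sep = depthwise_separable_params(3, 64, 128)
print(std, sep, round(std / sep, 1))  # → 73728 8768 8.4
```

For a 3×3 layer with 64 input and 128 output channels, the factorization needs roughly 8× fewer parameters, which is exactly the kind of saving that lets such networks stay light on limited labelled samples.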
- Published
- 2022
- Full Text
- View/download PDF
393. Soft hypergraph regularized weighted low rank subspace clustering for hyperspectral image band selection.
- Author
-
Xu, Jinhuan, Yan, Guang, Zhao, Xingwen, Ai, Mingshun, Li, Xiangdong, and Liu, Pengfei
- Subjects
- *
LOW-rank matrices , *WEIGHTED graphs , *CAYLEY graphs , *SIMILARITY (Geometry) - Abstract
Recently, the graph regularized low-rank representation (GLRR) has been introduced to Hyperspectral Image (HSI) analysis to explore global structure information, by seeking the lowest-rank representation of all the data jointly, and local geometrical structure, through graph regularization. However, traditional graph models are mostly based on a simple intrinsic structure. In this paper, to represent the complex intrinsic band information and further enhance the low rank of the matrix, we propose a soft hypergraph regularized weighted low-rank subspace clustering (HGWLRSC) method for HSI band selection. On the one hand, considering the complex correlation between adjacent bands, a hypergraph technique is introduced, which takes advantage of band similarity properties to extract more valuable information and reveals the multiple intrinsic relationships of HSI band sets. On the other hand, a weighted low-rank subspace clustering model is introduced not only to capture the global structure information of the learned representation coefficient matrix but also to consider the importance of different rank components. The proposed algorithm was tested on three widely used hyperspectral data sets, and the experimental results indicate that the proposed HGWLRSC algorithm outperforms other state-of-the-art methods and achieves very competitive band selection performance for HSI. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
394. Soil moisture retrieval using space-borne GNSS reflectometry: a comprehensive review.
- Author
-
Rahmani, Mina, Asgari, Jamal, and Asgarimehr, Milad
- Subjects
- *
SOIL moisture , *SOIL moisture measurement , *REFLECTOMETRY , *SURFACE of the earth , *IRRIGATION management , *BISTATIC radar - Abstract
Accurate knowledge of soil moisture is critical for hydrological and agricultural applications such as agricultural irrigation management, drought characterization, and flood detection. Researchers have attempted to measure soil moisture using various methods and techniques. Traditionally, soil moisture estimates were based on field measurements. Remote sensing satellites, in turn, have been widely used to provide continuous soil moisture measurements worldwide, but encounter problems such as the lack of simultaneously high spatial and temporal sampling rates and dependence on weather conditions. In recent decades, however, GNSS signals reflected from the Earth's surface (the GNSS-R technique) have been increasingly used for soil moisture monitoring, due to the numerous advantages the technique offers. This paper aims to provide a comprehensive review of soil moisture retrieval by two space-based GNSS-R missions (TDS-1 and CYGNSS) to show past trends, gaps, and opportunities for soil moisture monitoring through GNSS-R observations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
395. Pairwise coarse registration of point clouds by traversing voxel-based 2-plane bases.
- Author
-
Fu, Yongjian, Li, Zongchun, Xiong, Feng, He, Hua, Wang, Wenqi, and Deng, Yong
- Subjects
- *
OPTICAL scanners , *POINT cloud , *IMAGE registration , *ROOT-mean-squares , *DIHEDRAL angles , *BASE pairs , *RECORDING & registration , *GEOSTATIONARY satellites - Abstract
To obtain complete coverage when scanning a large-scale scene with a laser scanner, registration of pairwise point clouds using corresponding geometric features is usually necessary when direct geographic reference information from GNSS sensors is insufficient. In this paper, we propose a target-less coarse registration method for pairwise point clouds using corresponding voxel-based planes, which are identified by a pair of conjugate 2-plane bases. The voxel-based planes are first extracted from the entire point cloud with 3D cubic grids decomposed by octree-based voxelization. Then, the 2-plane bases in each point cloud are constructed, and by comparing the dihedral angles of two 2-plane bases from the source and target point clouds, respectively, the conjugate 2-plane base pairs are generated one by one. Next, a set of plane correspondences is identified by a pair of conjugate 2-plane bases, and its corresponding largest consistency planes (LCP) set is calculated. Finally, a series of plane correspondence sets is obtained using the generated pairs of conjugate 2-plane bases, and the one with the highest LCP is used to compute the transformation matrix between the pairwise point clouds. Experimental results revealed that our proposed pairwise coarse registration method is effective for aligning point clouds acquired from indoor and outdoor scenes, with rotation errors less than 0.4 degrees, translation errors less than 0.4 m, root mean square distance (RMSD) less than 0.42 m, and a successful registration rate (SRR) of about 98%. Furthermore, our proposed method was more efficient than point- and line-based methods under the same hardware and software configuration. [ABSTRACT FROM AUTHOR]
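The key invariant exploited above is that the dihedral angle between two planes survives any rigid transformation, so candidate base pairs can be matched by angle alone before any transformation is computed. A minimal numpy sketch of that matching step (unit normals and tolerance are illustrative; the LCP verification stage is omitted):

```python
import numpy as np

def dihedral_angle(n1, n2):
    """Angle between two planes, from their unit normals (radians)."""
    c = np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)  # abs: normal signs may flip
    return float(np.arccos(c))

def match_bases(src_bases, tgt_bases, tol=np.radians(1.0)):
    """Pair 2-plane bases whose dihedral angles agree within tol.
    Each base is a tuple of two unit plane normals."""
    pairs = []
    for i, (a1, a2) in enumerate(src_bases):
        for j, (b1, b2) in enumerate(tgt_bases):
            if abs(dihedral_angle(a1, a2) - dihedral_angle(b1, b2)) < tol:
                pairs.append((i, j))
    return pairs

# A 90-degree base matches only the 90-degree base in the target cloud.
ex, ey, ez = np.eye(3)
src = [(ex, ey)]                                # 90 degrees
tgt = [(ex, ez), (ex, (ex + ey) / np.sqrt(2))]  # 90 and 45 degrees
print(match_bases(src, tgt))  # → [(0, 0)]
```

Since the angle check is cheap, the expensive consistency verification only runs on the surviving candidates, which is what makes this style of coarse registration fast.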
- Published
- 2022
- Full Text
- View/download PDF
396. Hyperspectral image super-resolution based on attention ConvBiLSTM network.
- Author
-
Lu, Xiaochen, Liu, Xiaohui, Zhang, Lei, Jia, Fengde, and Yang, Yunlong
- Subjects
- *
MULTISPECTRAL imaging , *SPECTRAL imaging , *HIGH resolution imaging , *RECURRENT neural networks , *HYPERSPECTRAL imaging systems , *CONVOLUTIONAL neural networks , *SEQUENTIAL pattern mining , *SPATIAL resolution - Abstract
In this paper, a hyperspectral (HS) image super-resolution (SR) approach based on an attention convolutional bidirectional long short-term memory (ConvBiLSTM) network is proposed, aiming to explore collaborative spatial and spectral attention characteristics and thereby enhance the spatial resolution of HS images. The ConvBiLSTM network combines the spatial feature mining and sequential predicting abilities of convolutional neural networks and recurrent neural networks, respectively. We adapt the ConvBiLSTM network for our super-resolution purpose by regarding each band as a single frame of sequential data, and propose a band-sharing spatial-channel attention-combined ConvBiLSTM SR method to intensify the saliency features. Moreover, a spatially regularized loss function is presented to further promote the fidelity of the super-resolved HS image. Experiments on four HS data sets show that the proposed approach outperforms several state-of-the-art HS image SR techniques in terms of spectral fidelity. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
397. A novel unsupervised multiple change detection method for VHR remote sensing imagery using CNN with hierarchical sampling.
- Author
-
Fang, Hong, Du, Peijun, and Wang, Xin
- Subjects
- *
REMOTE sensing , *CONVOLUTIONAL neural networks , *VECTOR analysis - Abstract
Detecting multiple changes from remote sensing imagery is a research hotspot. Very high resolution (VHR) images contain detailed spatial information and thus are often used in multiple change detection (CD). Compared with supervised multiple CD methods, unsupervised methods are more attractive due to their ability to extract changes automatically. However, many existing unsupervised methods fail to adaptively exploit the high-level features relevant to multiple changes in VHR images. In this paper, a novel unsupervised multiple CD method for VHR images is proposed. First, the magnitude of the spectral change vectors (SCVs) is calculated by change vector analysis, and fuzzy c-means clustering is performed to generate the unchanged and candidate changed samples. Secondly, the candidate changed samples are further clustered based on the direction of the SCVs, and multiple changed samples are selected using a local window. Finally, image patches composed of neighbourhood areas of the generated samples are fed into a convolutional neural network (CNN) for training, and the multiple change map is obtained by the trained CNN. Experiments were performed on four data sets, and the results indicate that the proposed unsupervised multiple CD approach outperforms other state-of-the-art methods. [ABSTRACT FROM AUTHOR]
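The change vector analysis step above uses two quantities per pixel: the magnitude of the spectral change vector (changed vs unchanged) and its direction (which kind of change). A minimal numpy sketch for a two-band pair, with a toy image in place of real VHR data:

```python
import numpy as np

def change_vector_analysis(img_t1, img_t2):
    """Change vector analysis on a bi-temporal two-band image pair.

    Returns per-pixel magnitude of the spectral change vectors (used to
    separate changed from unchanged pixels) and the direction angle
    (used to distinguish kinds of change). Images: (h, w, 2).
    """
    scv = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.linalg.norm(scv, axis=-1)
    direction = np.arctan2(scv[..., 1], scv[..., 0])
    return magnitude, direction

t1 = np.zeros((2, 2, 2))
t2 = np.zeros((2, 2, 2))
t2[0, 0] = [3.0, 4.0]            # one changed pixel
mag, ang = change_vector_analysis(t1, t2)
print(mag[0, 0], mag[1, 1])      # → 5.0 0.0
```

Clustering the magnitudes (e.g. with fuzzy c-means, as in the paper) then yields candidate changed pixels, and clustering the directions separates the multiple change classes.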
- Published
- 2022
- Full Text
- View/download PDF
398. AANet: an attention-based alignment semantic segmentation network for high spatial resolution remote sensing images.
- Author
-
Xue, Gunagkuo, Liu, Yikun, Huang, Yuwen, Li, Mingsong, and Yang, Gongping
- Subjects
- *
REMOTE sensing , *SPATIAL resolution , *DATA mining - Abstract
In this paper, we present an efficient network to tackle three critical problems in high spatial resolution (HSR) remote sensing image segmentation: (i) feature misalignment, (ii) insufficient contextual information extraction, and (iii) various class imbalance issues. In detail, we propose a novel Feature Alignment Block (FAB) to suppress misalignment issues under the guidance of an anchor map. Further, to extract sufficient information, we design a Contextual Augmentation Block (CAB) to augment features of different semantic levels. Finally, we present an Annealing Online Hard Example Mining (AOHEM) strategy to handle the various class imbalance issues by dynamically adjusting the focus of the network. We apply the above designs to FPN to form our Attention-based Alignment Network (AANet). Experimental results demonstrate that the proposed method achieves promising results on the challenging iSAID and Vaihingen datasets, with a better trade-off between accuracy and complexity. [ABSTRACT FROM AUTHOR]
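The AOHEM idea can be illustrated with a small sketch: online hard example mining keeps only the highest-loss pixels, and the annealing shrinks the kept fraction as training progresses. The abstract does not give the schedule, so the linear anneal and the fractions below are assumptions, not the paper's settings:

```python
import numpy as np

def annealing_ohem(losses, epoch, total_epochs, min_frac=0.25):
    """Annealing online hard example mining on per-pixel losses.

    The kept fraction shrinks linearly from 1.0 to min_frac over training,
    so the network gradually focuses on rare, hard examples."""
    frac = 1.0 - (1.0 - min_frac) * (epoch / total_epochs)
    k = max(1, int(frac * losses.size))
    hardest = np.sort(losses.ravel())[::-1][:k]  # top-k losses
    return hardest.mean()

losses = np.array([0.1, 0.2, 0.9, 1.5])
print(annealing_ohem(losses, epoch=0, total_epochs=10))   # all pixels → 0.675
print(annealing_ohem(losses, epoch=10, total_epochs=10))  # hardest 25% → 1.5
```

Starting with all examples and annealing toward the hard ones avoids the instability of mining hard examples from an untrained network.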
- Published
- 2022
- Full Text
- View/download PDF
399. Vertex extraction-guided local back-projection algorithm for velocity estimation using ground penetrating radar data.
- Author
-
Xu, Zihan, Su, Fulin, and Xu, Guodong
- Subjects
- *
GROUND penetrating radar , *VELOCITY , *ELECTROMAGNETIC waves , *ALGORITHMS , *HYPERBOLA - Abstract
Estimating the velocity of electromagnetic waves in the medium is essential for ground penetrating radar (GPR) to reconstruct target scenes. However, the existing velocity estimation methods based on imaging algorithms, e.g. the back-projection (BP)- and the Stolt migration-based methods, are susceptible to clutter and cannot strike a balance between depth independence and computational efficiency. Therefore, a velocity estimation method is proposed based on the vertex extraction-guided local BP algorithm. Initially, the high-order moment (HOM) is introduced into GPR data by analysing the distributions of target hyperbolas and clutter. Designed to avoid clutter, the proposed HOM sliding window extracts the hyperbola vertex at a high signal-to-clutter ratio region. By virtue of the extracted vertex, the local hyperbola region is subsequently extracted to reduce computation. Additionally, this paper adopts the depth-independent BP algorithm and further develops a local BP algorithm utilizing its imaging flexibility. Guided by the extracted vertex, the proposed local BP algorithm is iteratively performed on the local hyperbola region to estimate the velocity by a focus metric. Comprehensive experiments demonstrate the effectiveness and robustness of the proposed velocity estimation method. The proposed method outperforms the existing methods in clutter robustness, computational efficiency, and depth independence. [ABSTRACT FROM AUTHOR]
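The velocity search underlying imaging-based methods like the one above rests on the hyperbolic travel time of a buried point target: t(x) = 2·sqrt(d² + (x − x0)²)/v. A toy numpy sketch of the grid search, with a residual-based stand-in for the image focus metric (geometry and candidate speeds are illustrative):

```python
import numpy as np

def hyperbola_traveltime(x, x0, depth, v):
    """Two-way travel time of a point target at (x0, depth) for antenna
    position x, with wave speed v in the medium."""
    return 2.0 * np.sqrt(depth**2 + (x - x0)**2) / v

def estimate_velocity(x, t_obs, x0, depth, candidates):
    """Grid-search the wave speed that best explains the observed hyperbola.
    A sum of squared travel-time residuals replaces the BP image focus
    metric for brevity."""
    errors = [np.sum((hyperbola_traveltime(x, x0, depth, v) - t_obs) ** 2)
              for v in candidates]
    return candidates[int(np.argmin(errors))]

x = np.linspace(-1.0, 1.0, 21)       # antenna positions (m)
v_true = 0.1                         # m/ns, roughly dry soil
t_obs = hyperbola_traveltime(x, 0.0, 0.5, v_true)
candidates = np.linspace(0.05, 0.2, 16)
print(round(estimate_velocity(x, t_obs, 0.0, 0.5, candidates), 6))  # → 0.1
```

The paper's contribution is in extracting the vertex and local hyperbola region first so that this search runs only on clutter-free, depth-independent data; the search loop itself stays as simple as above.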
- Published
- 2022
- Full Text
- View/download PDF
400. CNN-Enhanced graph attention network for hyperspectral image super-resolution using non-local self-similarity.
- Author
-
Liu, Cong and Dong, Yaxin
- Subjects
- *
HIGH resolution imaging , *CONVOLUTIONAL neural networks , *FEATURE extraction , *SPECTRAL imaging , *GRAPH algorithms - Abstract
The small-sample problem that widely exists in the hyperspectral image (HSI) super-resolution task leads to insufficient feature extraction in network training. Therefore, it is necessary to design an effective network to fully extract the features of HSIs. In addition, existing HSI super-resolution (SR) networks usually capture multiple receptive fields by stacking massive convolutions, which inevitably produces many parameters. In this paper, we propose a novel HSI SR network based on a convolutional-neural-network-enhanced graph attention network (CEGATSR), which can fully capture different features using a graph attention block (GAB) and a depthwise separable convolution block (DSCB). Moreover, the graph attention block can also capture different receptive fields using relatively few layers. Specifically, we first divide the whole set of spectral bands into several groups and extract features separately for each group to reduce the parameters. Second, we design a parallel feature extraction unit to extract non-local and local features by combining the graph attention block (GAB) and the depthwise separable convolution block (DSCB). The graph attention block makes full use of the non-local self-similarity strategy not only to self-learn the effective information but also to capture multiple receptive fields using relatively few parameters. The depthwise separable convolution block is designed to extract local feature information with few parameters. Third, we design a spatial-channel attention block (SCAB) to capture the global spatial-spectral features and to distinguish the importance of different channels. Extensive experiments on three hyperspectral datasets show that the proposed CEGATSR performs better than state-of-the-art SR methods. The source code is available online. [ABSTRACT FROM AUTHOR]
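The graph attention mechanism referenced above aggregates features over graph neighbours (for HSI SR, typically mutually similar patches) with learned attention weights. A simplified single-head numpy sketch with random weights; this is a generic GAT-style layer, not CEGATSR's GAB:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(H, A, W, a):
    """Single attention head over a graph: each node aggregates transformed
    neighbour features, weighted by attention coefficients.

    H: (n, f) node features; A: (n, n) adjacency (1 = connected);
    W: (f, f') projection; a: (2*f',) attention vector.
    """
    Z = H @ W
    n = Z.shape[0]
    scores = np.array([[a @ np.concatenate([Z[i], Z[j]]) for j in range(n)]
                       for i in range(n)])
    scores = np.maximum(0.2 * scores, scores)   # LeakyReLU, slope 0.2
    scores = np.where(A > 0, scores, -1e9)      # attend to neighbours only
    alpha = softmax(scores, axis=1)             # rows sum to 1
    return alpha @ Z

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))                 # 4 nodes (e.g. image patches)
A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]])
out = graph_attention(H, A, rng.standard_normal((3, 2)), rng.standard_normal(4))
print(out.shape)  # → (4, 2)
```

Because each layer can connect distant but similar nodes directly, a graph attention block reaches large effective receptive fields with few layers, which is the efficiency argument the abstract makes.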
- Published
- 2022
- Full Text
- View/download PDF