6 results for "Liu, Shanwei"
Search Results
2. Superpixel-Based Graph Laplacian Regularized and Weighted Robust Sparse Unmixing
- Author
- Zou, Xin, Xu, Mingming, Liu, Shanwei, and Sheng, Hui
- Abstract
The sparse unmixing (SU) technique is widely used in hyperspectral image (HSI) unmixing because it does not need to estimate the number of pure endmembers but directly obtains spectra from a known spectral library to construct the endmember matrix, which avoids the influence of endmember extraction on unmixing. However, some existing SU algorithms still have problems, such as insufficient consideration of abundance details and sensitivity to noise. To address these issues, this article proposes a superpixel-based graph Laplacian weighted robust SU (RSU) algorithm, which better reconstructs abundance details and reduces sensitivity to noise. A coarse abundance is first computed from the superpixel segmentation, from which a global spatial prior weight is derived. Weighted RSU is then applied to each superpixel, combining local and global cooperation to reduce sensitivity to noise. On this basis, to better reconstruct abundance details, the spatial position and spectral information of pixels within each superpixel are used to construct a weighted graph representing the similarity between pixels. Finally, the alternating direction method of multipliers (ADMM) performs structural optimization at the superpixel scale, retaining the structural information of the abundance while reducing the amount of computation. Experiments on three simulated datasets and three real datasets show that the proposed algorithm outperforms state-of-the-art SU algorithms.
- Published
- 2024
3. Ultralightweight Feature-Compressed Multihead Self-Attention Learning Networks for Hyperspectral Image Classification
- Author
- Li, Xinhao, Xu, Mingming, Liu, Shanwei, Sheng, Hui, and Wan, Jianhua
- Abstract
Vision transformers are widely used in hyperspectral image (HSI) classification, with self-attention as their core feature extractor. Self-attention has a wider receptive field than convolution. However, existing vision transformers for classifying HSIs with many bands generally suffer from high computational complexity and large parameter counts. In this article, we propose an ultralightweight feature-compressed multihead self-attention learning network (UFMS-LN), which mainly consists of a novel compressed feature multihead self-attention (CF-MHSA), a spatial feature enhancement-enhancing transformation reduction (SFE-ETR) module, and a spatial-spectral hybridization-receptive field attention convolutional operation (SH-RFAConv). By effectively compressing feature maps in the spatial-spectral dimensions, CF-MHSA matches the feature extraction capability of state-of-the-art self-attention mechanisms while requiring two orders of magnitude fewer floating-point operations (FLOPs) and parameters. SH-RFAConv is designed to emphasize local features; it extracts spatial and spectral features simultaneously and has a wider receptive field than traditional convolutional operations. Furthermore, SFE-ETR is a preprocessing module for UFMS-LN that combines global spatial feature enhancement with enhancing transformation reduction (ETR). Extensive experiments on four benchmark HSI datasets show that this method achieves superior results compared with existing state-of-the-art HSI classification networks.
- Published
- 2024
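The FLOP savings claimed for CF-MHSA come from shrinking the attention map before it is formed. A toy illustration of one generic way to do this, by average-pooling keys and values along the token axis, is sketched below (this is not the paper's CF-MHSA; the function names and the pooling scheme are assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compressed_self_attention(X, Wq, Wk, Wv, pool=4):
    """Single-head self-attention whose keys/values are average-pooled
    along the token axis, shrinking the N x N attention map to N x (N/pool)."""
    N, d = X.shape
    Q = X @ Wq
    Xp = X[: (N // pool) * pool].reshape(N // pool, pool, d).mean(axis=1)  # pooled tokens
    K, V = Xp @ Wk, Xp @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))   # (N, N/pool) rather than (N, N)
    return attn @ V
```

The attention cost drops from O(N^2 d) to O(N^2 d / pool), which is the general mechanism behind token-compression attention variants.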
4. Quantitative Inversion of Oil Film Thickness Based on Airborne Hyperspectral Data Using the 1DCNN_GRU Model
- Author
- Wang, Meiqi, Yang, Junfang, Liu, Shanwei, Gu, Yanfeng, Xu, Mingming, Ma, Yi, Zhang, Jie, and Wan, Jianhua
- Abstract
Oil film thickness (OFT) is an important indicator for estimating the amount of an oil spill, and accurately quantifying the OFT is of great significance for loss assessment. In this article, hyperspectral images (HSIs) of different OFTs (0.01–3.04 mm) were acquired through a ground experiment, and their spectral characteristics were analyzed. To address the poor spectral separability of different OFTs, a 1-D convolutional neural network-gated recurrent unit (1DCNN_GRU) model was developed for the quantitative inversion of OFT and validated through experiments on airborne Cubert S185 HSIs. The experimental results indicated that: 1) the proposed 1DCNN_GRU model effectively addressed the reduced inversion accuracy caused by poor spectral separability, and its inversion results outperformed those of the support vector regression (SVR), convolutional neural network (CNN), and gated recurrent unit (GRU) models; moreover, the optimal time for a hyperspectral sensor to monitor OFT was at noon. 2) The proposed model using airborne hyperspectral data exhibited excellent inversion performance for OFTs greater than 0.07 mm, with the best performance in the 0.60–0.90 mm range. 3) HSI-based OFT inversion assisted by brightness temperature (BT) data was more accurate than inversion using single-source data; in particular, the proposed model had advantages in feature-level and decision-level inversion of OFT in the 0.01–0.30 and 1.00–3.04 mm ranges, respectively. This research provides technical support for the detection of OFT.
- Published
- 2023
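The recurrent half of the 1DCNN_GRU model is a gated recurrent unit run over a pixel's spectral sequence. A minimal NumPy sketch of a GRU cell follows (illustrative only: the weight initialization, dimensions, and function names are assumptions, and the paper's full architecture also includes a 1-D CNN front end and a regression head):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated-recurrent-unit step: gate the previous state h with input x."""
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde

def gru_encode(seq, d_hidden, rng):
    """Run the GRU over a (steps, d_in) band sequence; return the final state."""
    d_in = seq.shape[1]
    mk = lambda a, b: rng.standard_normal((a, b)) * 0.1
    Wz, Wr, Wh = mk(d_in, d_hidden), mk(d_in, d_hidden), mk(d_in, d_hidden)
    Uz, Ur, Uh = mk(d_hidden, d_hidden), mk(d_hidden, d_hidden), mk(d_hidden, d_hidden)
    h = np.zeros(d_hidden)
    for x in seq:
        h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
    return h
```

Treating the spectrum as a sequence lets the gates carry band-to-band context, which is the motivation for pairing a GRU with the spectral features from the 1-D CNN.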
5. UST-Net: A U-Shaped Transformer Network Using Shifted Windows for Hyperspectral Unmixing
- Author
- Yang, Zhiru, Xu, Mingming, Liu, Shanwei, Sheng, Hui, and Wan, Jianhua
- Abstract
Autoencoders (AEs) are commonly used for learning low-dimensional data representations and performing data reconstruction, which makes them suitable for hyperspectral unmixing (HU). However, AE networks trained pixel by pixel, as well as those employing localized convolutional filters, disregard the global material distribution and long-range interdependencies, losing spatial feature information essential to unmixing. To overcome this limitation, we propose an innovative deep neural network model named the U-shaped transformer network using shifted windows (UST-Net). UST-Net prioritizes the more discriminative and significant spatial information in the scene by using multihead self-attention blocks based on shifted windows. Unlike patch-based unmixing networks, UST-Net operates on the complete image, eliminating inconsistencies associated with patches. Moreover, downsampling and upsampling stages extract hyperspectral image (HSI) feature maps at different scales, generating a context-rich and spatially accurate abundance map without losing local details. Experimental results on one synthetic dataset and three real datasets demonstrate that UST-Net significantly outperforms both traditional and several other advanced neural network methods. Our code is publicly available at
https://github.com/UPCGIT/UST-Net.
- Published
- 2023
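Shifted-window attention of the kind UST-Net builds on begins by partitioning the feature map into non-overlapping windows, optionally after a cyclic shift so that successive layers mix information across window borders. A small sketch of that partition step (not the released UST-Net code; the `win` and `shift` parameters are illustrative):

```python
import numpy as np

def window_partition(x, win, shift=0):
    """Split an (H, W, C) feature map into non-overlapping win x win windows,
    optionally after a cyclic shift as in shifted-window attention.
    Returns an (num_windows, win, win, C) array."""
    if shift:
        x = np.roll(x, (-shift, -shift), axis=(0, 1))  # cyclic shift
    H, W, C = x.shape
    x = x.reshape(H // win, win, W // win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win, win, C)
```

Self-attention is then computed independently inside each window, which keeps the cost linear in image size while the alternating shift restores cross-window communication.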
6. BAMS-FE: Band-by-Band Adaptive Multiscale Superpixel Feature Extraction for Hyperspectral Image Classification
- Author
- Li, Jianmeng, Sheng, Hui, Xu, Mingming, Liu, Shanwei, and Zeng, Zhe
- Abstract
Superpixel segmentation has emerged as a prominent and effective approach for the simultaneous extraction of spatial-spectral features from hyperspectral imagery. Although effective, existing feature extraction algorithms typically perform superpixel segmentation on a single band, failing to utilize the rich spectral and spatial information available across the remaining bands. Moreover, current superpixel feature extraction methods lack scientific guidance for determining optimal multiscale parameters, which can lead to suboptimal segmentation and increase the complexity of hyperspectral analysis. To overcome these limitations, this article presents a novel band-by-band adaptive multiscale superpixel feature extraction (BAMS-FE) method. The method comprises two key components: a band-by-band superpixel-based feature extraction method and an adaptive optimal superpixel multiscale determination method. First, the band-by-band superpixel-based feature extraction method performs superpixel segmentation on each band of a hyperspectral image (HSI), thereby extracting joint spatial and spectral features. Second, the adaptive optimal superpixel multiscale determination method uses an unsupervised approach to determine the optimal multiscale superpixel segmentation parameters. BAMS-FE combines these two components. The proposed algorithm is evaluated on five different datasets, and the results demonstrate its excellent precision and stability. Whether given the principal components retaining 99% of the variance after PCA transformation or raw, unprocessed hyperspectral data, BAMS-FE achieves stable and satisfactory classification performance. Additionally, we compared its performance with several other state-of-the-art algorithms and found that it outperformed them in accuracy. Our code is publicly available at
https://github.com/UPCGIT/BAMS-FE.
- Published
- 2023
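The band-by-band feature extraction step amounts to averaging each band over its own superpixel segmentation. A minimal sketch of that averaging, given precomputed per-band segment labels (not the released BAMS-FE code; the label layout and function name are assumptions, and a real pipeline would first run a segmenter such as SLIC on each band):

```python
import numpy as np

def band_superpixel_features(hsi, labels_per_band):
    """Replace each pixel value with the mean of its superpixel, band by band.
    hsi: (H, W, B) cube; labels_per_band: (H, W, B) integer segment labels,
    one independent segmentation per band."""
    H, W, B = hsi.shape
    out = np.empty_like(hsi, dtype=float)
    for b in range(B):
        band = hsi[:, :, b].ravel()
        lab = labels_per_band[:, :, b].ravel()
        sums = np.bincount(lab, weights=band)    # per-segment totals
        counts = np.bincount(lab)                # per-segment pixel counts
        out[:, :, b] = (sums / counts)[lab].reshape(H, W)  # broadcast means back
    return out
```

Because every pixel is replaced by the mean of its segment, the smoothing suppresses within-segment noise while preserving each band's total energy.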
Discovery Service for Jio Institute Digital Library