36 results for "Duan, Puhong"
Search Results
2. MOFA: A novel dataset for Multi-modal Image Fusion Applications
- Author
-
Xiao, Kaihua, Kang, Xudong, Liu, Haibo, and Duan, Puhong
- Published
- 2023
- Full Text
- View/download PDF
3. Self-supervised learning-based oil spill detection of hyperspectral images
- Author
-
Duan, Puhong, Xie, Zhuojun, Kang, Xudong, and Li, Shutao
- Published
- 2022
- Full Text
- View/download PDF
4. Attribute filter based infrared and visible image fusion
- Author
-
Mo, Yan, Kang, Xudong, Duan, Puhong, Sun, Bin, and Li, Shutao
- Published
- 2021
- Full Text
- View/download PDF
5. EUAVDet: An Efficient and Lightweight Object Detector for UAV Aerial Images with an Edge-Based Computing Platform.
- Author
-
Wu, Wanneng, Liu, Ao, Hu, Jianwen, Mo, Yan, Xiang, Shao, Duan, Puhong, and Liang, Qiaokang
- Published
- 2024
- Full Text
- View/download PDF
6. Texture-aware total variation-based removal of sun glint in hyperspectral images
- Author
-
Duan, Puhong, Lai, Jibao, Kang, Jian, Kang, Xudong, Ghamisi, Pedram, and Li, Shutao
- Published
- 2020
- Full Text
- View/download PDF
7. Hyperspectral image visualization with edge-preserving filtering and principal component analysis
- Author
-
Kang, Xudong, Duan, Puhong, and Li, Shutao
- Published
- 2020
- Full Text
- View/download PDF
8. Hair cluster detection model based on dermoscopic images.
- Author
-
Xiong, Ya, Yu, Kun, Lan, Yujie, Lei, Zeyuan, Fan, Dongli, Li, Huafeng, Duan, Puhong, and Wang, Youlin
- Subjects
OBJECT recognition (Computer vision), DERMOSCOPY, HAIR, DERMATOMYOSITIS, BALDNESS, POLYMYOSITIS, TREATMENT effectiveness - Abstract
Introduction: Hair loss has always bothered many people, and numerous individuals may face the issue of sparse hair. Methods: Because accurate research on detecting sparse hair is scarce, this paper proposes a sparse hair cluster detection model based on an improved object-detection neural network and dermoscopic images of sparse hair, in order to optimize the evaluation of treatment outcomes for hair-loss patients. A new Multi-Level Feature Fusion Module is designed to extract and fuse features at different levels. Additionally, a new Channel-Space Dual Attention Module is proposed to consider the channel and spatial dimensions simultaneously, further enhancing the model's representational capacity and the precision of sparse hair cluster detection. Results: Testing on self-annotated data shows that the proposed method can accurately identify and count sparse hair clusters, surpassing existing methods in both accuracy and efficiency. Discussion: It can therefore serve as an effective tool for the early detection and treatment of sparse hair and offer greater convenience to medical professionals in diagnosis and treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Edge-Guided Hyperspectral Change Detection.
- Author
-
Lu, Xukun, Duan, Puhong, Kang, Xudong, and Deng, Bin
- Abstract
Hyperspectral change detection (HCD) is widely applied in domains such as precision agriculture, disaster assessment, land use, and environmental monitoring. Most HCD methods aim at extracting and classifying spectral variation features with dimension reduction and machine-learning methods. Different from previous work, this letter proposes an edge-guided HCD method. Specifically, a subtraction operation is adopted to extract the difference hyperspectral image (HSI). Then, edge-preserving filtering is performed on the difference HSI to extract spectral–spatial features. Next, the number of extracted features is reduced through kernel principal component analysis (PCA). Finally, the fused features are input into a spectral classifier, followed by edge-preserving filtering, to obtain the final change detection result. Experiments on several HCD datasets demonstrate that the proposed method consistently outperforms other advanced approaches in both subjective and objective evaluations when only a limited number of labeled samples are available. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
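The four-step pipeline in the abstract above can be sketched as follows. This is a minimal NumPy/scikit-learn sketch under stated assumptions: the function name and parameters are illustrative, and a Gaussian filter stands in for the paper's edge-preserving filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def edge_guided_hcd(hsi_t1, hsi_t2, train_idx, train_labels, n_components=10):
    """hsi_t1, hsi_t2: (H, W, B) hyperspectral images of two dates."""
    h, w, b = hsi_t1.shape
    # 1) Difference image via subtraction.
    diff = hsi_t1.astype(np.float64) - hsi_t2.astype(np.float64)
    # 2) Per-band smoothing (placeholder for edge-preserving filtering).
    smoothed = np.stack([gaussian_filter(diff[:, :, i], sigma=1.0)
                         for i in range(b)], axis=-1)
    # 3) Dimensionality reduction with kernel PCA.
    flat = smoothed.reshape(-1, b)
    feats = KernelPCA(n_components=n_components, kernel="rbf").fit_transform(flat)
    # 4) Spectral classifier on the fused features (change / no-change labels).
    clf = SVC().fit(feats[train_idx], train_labels)
    return clf.predict(feats).reshape(h, w)
```

The final edge-preserving post-filtering of the classification map is omitted here for brevity.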
10. Multi-Scale Superpixel-Guided Structural Profiles for Hyperspectral Image Classification.
- Author
-
Wang, Nanlan, Zeng, Xiaoyong, Duan, Yanjun, Deng, Bin, Mo, Yan, Xie, Zhuojun, and Duan, Puhong
- Subjects
HYPERSPECTRAL imaging systems, FEATURE selection, DEEP learning, REMOTE sensing, CLASSIFICATION - Abstract
Hyperspectral image classification has received a great deal of attention in the remote sensing field. However, most classification methods require a large number of training samples to obtain satisfactory performance, and in real applications it is difficult for users to label sufficient samples. To overcome this problem, in this work, a novel multi-scale superpixel-guided structural profile method is proposed for the classification of hyperspectral images. First, the number of spectral bands of the original image is reduced with an averaging fusion method. Then, multi-scale structural profiles are extracted with the help of a superpixel segmentation method. Finally, the extracted multi-scale structural profiles are fused with an unsupervised feature selection method, followed by a spectral classifier, to obtain classification results. Experiments on several hyperspectral datasets verify that the proposed method produces outstanding classification results with limited samples compared to other advanced classification methods. The classification accuracies obtained by the proposed method on the Salinas dataset are increased by 43.25%, 31.34%, and 46.82% in terms of overall accuracy (OA), average accuracy (AA), and Kappa coefficient, respectively, compared to recently proposed deep learning methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. Multilayer Degradation Representation-Guided Blind Super-Resolution for Remote Sensing Images.
- Author
-
Kang, Xudong, Li, Jier, Duan, Puhong, Ma, Fuyan, and Li, Shutao
- Subjects
REMOTE sensing, HIGH resolution imaging, IMAGE reconstruction, FEATURE extraction - Abstract
Remote sensing image super-resolution (SR) aims to boost the image resolution while recovering rich high-frequency details. Currently, most SR methods are based on the assumption that the degradation kernel is a specific downsampler. However, the degradation kernel is unknown and sophisticated for real remote sensing scenes, leading to a severe performance drop. To alleviate this problem, we propose a multilayer degradation representation-guided blind SR method for remote sensing images, which mainly consists of three key steps. First, unsupervised representation learning is exploited to learn the degradation representation from low-resolution images. Then, a degradation-guided deep residual module is designed to model high-order features across different scales from the original images. Finally, a multilayer degradation-aware feature fusion mechanism is proposed to restore the finer details. Experiments on synthetic and real datasets demonstrate that the proposed method achieves promising performance with respect to other state-of-the-art SR approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. Multi-View Structural Feature Extraction for Hyperspectral Image Classification.
- Author
-
Liang, Nannan, Duan, Puhong, Xu, Haifeng, and Cui, Lin
- Subjects
FEATURE extraction, REMOTE sensing, CLASSIFICATION - Abstract
The hyperspectral feature extraction technique is one of the most popular topics in the remote sensing community. However, most hyperspectral feature extraction methods are based on region-based local information descriptors while neglecting the correlation and dependencies of different homogeneous regions. To alleviate this issue, this paper proposes a multi-view structural feature extraction method to furnish a complete characterization of the spectral–spatial structures of different objects, which mainly consists of the following key steps. First, the spectral number of the original image is reduced with the minimum noise fraction (MNF) method, and a relative total variation is exploited to extract the local structural feature from the dimension-reduced data. Then, with the help of a superpixel segmentation technique, the nonlocal structural features from intra-view and inter-view are constructed by considering the intra- and inter-similarities of superpixels. Finally, the local and nonlocal structural features are merged together to form the final image features for classification. Experiments on several real hyperspectral datasets indicate that the proposed method outperforms other state-of-the-art classification methods in terms of visual performance and objective results, especially when the number of training samples is limited. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. Data Science in Economics
- Author
-
Nosratabadi, Saeed, Mosavi, Amir, Duan, Puhong, and Ghamisi, Pedram
- Subjects
FOS: Economics and business, FOS: Computer and information sciences, Computer Science - Machine Learning, Statistics - Machine Learning, Machine Learning (stat.ML), General Finance (q-fin.GN), Quantitative Finance - General Finance, Machine Learning (cs.LG), 68T05 - Abstract
This paper provides the state of the art of data science in economics. Through a novel taxonomy of applications and methods, advances in data science are investigated in three individual classes: deep learning models, ensemble models, and hybrid models. Application domains include the stock market, marketing, e-commerce, corporate banking, and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal a trend toward the advancement of hybrid models, as more than 51% of the reviewed articles applied them; moreover, based on the RMSE accuracy metric, hybrid models had higher prediction accuracy than other algorithms, although the trends are expected to move toward the advancement of deep learning models.
- Published
- 2020
14. Multilayer Global Spectral–Spatial Attention Network for Wetland Hyperspectral Image Classification.
- Author
-
Xie, Zhuojun, Hu, Jianwen, Kang, Xudong, Duan, Puhong, and Li, Shutao
- Subjects
COASTAL wetlands, WETLANDS monitoring, WETLANDS, CONVOLUTIONAL neural networks, COASTAL mapping, WETLAND soils, ECOSYSTEMS, RESTORATION ecology - Abstract
Coastal wetland monitoring plays an important role in protecting and restoring ecosystems worldwide. UAV-hyperspectral imaging, an emerging technique for Earth observation and space exploration, offers great potential for identifying different wetland species. In this work, a multilayer global spectral–spatial attention network (MGSSAN) is proposed for mapping coastal wetlands, which mainly consists of two major steps. First, a two-branch convolutional neural network (CNN) framework with residual connections is developed to obtain an initial classification probability map, in which one branch captures the spectral information, the other branch extracts spatial information, and a global spectral–spatial attention module is designed to guide the networks toward the more discriminative features. Second, an extended random walker method is utilized to optimize the initial classification probabilities so as to yield the final map. Experiments performed on three wetland HSI datasets that we created verify that the proposed method obtains superior performance with respect to several state-of-the-art hyperspectral image classification methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Robust Normalized Softmax Loss for Deep Metric Learning-Based Characterization of Remote Sensing Images With Label Noise.
- Author
-
Kang, Jian, Fernandez-Beltran, Ruben, Duan, Puhong, Kang, Xudong, and Plaza, Antonio J.
- Subjects
REMOTE sensing, WIRELESS geolocation systems, NOISE, LOGARITHMIC functions, DEEP learning, INFORMATION modeling - Abstract
Most deep metric learning-based image characterization methods exploit supervised information to model the semantic relations among the remote sensing (RS) scenes. Nonetheless, the unprecedented availability of large-scale RS data makes the annotation of such images very challenging, requiring automated supportive processes. Whether the annotation is assisted by aggregation or crowd-sourcing, the RS large-variance problem, together with other important factors [e.g., geo-location/registration errors, land-cover changes, even low-quality Volunteered Geographic Information (VGI), etc.] often introduce the so-called label noise, i.e., semantic annotation errors. In this article, we first investigate the deep metric learning-based characterization of RS images with label noise and propose a novel loss formulation, named robust normalized softmax loss (RNSL), for robustly learning the metrics among RS scenes. Specifically, our RNSL improves the robustness of the normalized softmax loss (NSL), commonly utilized for deep metric learning, by replacing its logarithmic function with the negative Box–Cox transformation in order to down-weight the contributions from noisy images on the learning of the corresponding class prototypes. Moreover, by truncating the loss with a certain threshold, we also propose a truncated robust normalized softmax loss (t-RNSL) which can further enforce the learning of class prototypes based on the image features with high similarities between them, so that the intraclass features can be well grouped and interclass features can be well separated. Our experiments, conducted on two benchmark RS data sets, validate the effectiveness of the proposed approach with respect to different state-of-the-art methods in three different downstream applications (classification, clustering, and retrieval). The codes of this article will be publicly available from https://github.com/jiankang1991. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
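The abstract's key idea, replacing the logarithm in the normalized softmax loss with the negative Box–Cox transformation to down-weight noisy samples, can be sketched numerically as below. The `q` and `scale` parameters and the NumPy formulation are our assumptions for illustration, not the authors' released code.

```python
import numpy as np

def rnsl(embeddings, prototypes, labels, q=0.5, scale=16.0):
    """Robust normalized softmax loss sketch.
    embeddings: (N, D) L2-normalized features; prototypes: (C, D) L2-normalized
    class prototypes; labels: (N,) integer class indices."""
    logits = scale * embeddings @ prototypes.T      # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    e = np.exp(logits)
    probs = e / e.sum(axis=1, keepdims=True)        # normalized softmax
    p_true = probs[np.arange(len(labels)), labels]
    # Negative Box-Cox transform (1 - p^q) / q replaces -log(p); it recovers
    # -log(p) as q -> 0 but grows more slowly for small p, so noisy labels
    # contribute less to the prototype updates.
    return np.mean((1.0 - p_true ** q) / q)
```

A correctly matched sample (probability near 1) contributes a near-zero loss, while a grossly mismatched one contributes a bounded loss of at most 1/q, which is the robustness mechanism the abstract describes.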
16. Deep Unsupervised Embedding for Remotely Sensed Images Based on Spatially Augmented Momentum Contrast.
- Author
-
Kang, Jian, Fernandez-Beltran, Ruben, Duan, Puhong, Liu, Sicong, and Plaza, Antonio J.
- Subjects
DEEP learning, SUPERVISED learning, CONVOLUTIONAL neural networks, REMOTE sensing, LAND cover, REED-Solomon codes - Abstract
Convolutional neural networks (CNNs) have achieved great success when characterizing remote sensing (RS) images. However, the lack of sufficient annotated data (together with the high complexity of the RS image domain) often makes supervised and transfer learning schemes limited from an operational perspective. Despite the fact that unsupervised methods can potentially relieve these limitations, they are frequently unable to effectively exploit relevant prior knowledge about the RS domain, which may eventually constrain their final performance. In order to address these challenges, this article presents a new unsupervised deep metric learning model, called spatially augmented momentum contrast (SauMoCo), which has been specially designed to characterize unlabeled RS scenes. Based on the first law of geography, the proposed approach defines spatial augmentation criteria to uncover semantic relationships among land cover tiles. Then, a queue of deep embeddings is constructed to enhance the semantic variety of RS tiles within the considered contrastive learning process, where an auxiliary CNN model serves as an updating mechanism. Our experimental comparison, including different state-of-the-art techniques and benchmark RS image archives, reveals that the proposed approach obtains remarkable performance gains when characterizing unlabeled scenes since it is able to substantially enhance the discrimination ability among complex land cover categories. The source codes of this article will be made available to the RS community for reproducible research. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
17. Electromagnetic Induction Heating and Image Fusion of Silicon Photovoltaic Cell Electrothermography and Electroluminescence.
- Author
-
Yang, Ruizhen, Du, Bolun, Duan, Puhong, He, Yunze, Wang, Hongjin, He, Yigang, and Zhang, Kai
- Abstract
In the process of research, development, production, service, and maintenance of silicon photovoltaic (Si-PV) cells, the requirements for detection technology are becoming increasingly important. This paper investigates electromagnetic induction (EMI) heating and image fusion to improve the detection performance of electrothermography (ET) and electroluminescence (EL) for multiple defects in Si-PV cells. First, the principles of ET, EL, and other physical processes, including EMI, thermal radiation, and luminescence radiation, are analyzed. ET and EL techniques after EMI improvement are used to detect different defects, including scratches, broken gridlines, surface impurities, hidden cracks, and so on. The qualitative results show that EMI can greatly improve the defect-detection ability of ET and EL. Then, an image-fusion rule based on the L1 norm is proposed to fuse the sparse vectors of the ET and EL images, achieving integration and complementarity of the two wavelength detection data. Finally, the image-fusion results of the sparse representation (SR) algorithm are compared with those of the discrete wavelet transform, curvelet transform, dual-tree complex wavelet transform, and nonsubsampled contourlet transform. Five objective evaluation indexes, including root mean square error, peak signal-to-noise ratio, correlation coefficient, mutual information, and structural similarity index, are used to evaluate the fusion results. The overall evaluation shows that the SR algorithm is superior to the other algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
18. Multichannel Pulse-Coupled Neural Network-Based Hyperspectral Image Visualization.
- Author
-
Duan, Puhong, Kang, Xudong, Li, Shutao, and Ghamisi, Pedram
- Subjects
VISUALIZATION, IMAGE fusion, IMAGE color analysis, LAND cover, IMAGE analysis - Abstract
Hyperspectral image (HSI) visualization, which aims at displaying as much material information of the original image as possible on a trichromatic monitor with natural color, plays an important role in image interpretation and analysis. However, most HSI visualization methods focus only on presenting the detail information of a scene without providing natural colors or distinguishing land covers with similar colors. To address this problem, this article proposes a multichannel pulse-coupled neural network (MPCNN)-based HSI visualization method, which consists of the following steps. First, the MPCNN is proposed and explored to fuse the original HSI so as to obtain a fused band with rich spatial details. Then, a color mapping scheme is proposed to determine the weights of the red, green, and blue (RGB) channels. Finally, the weighted RGB channels are stacked together for visualization. Experiments performed on four hyperspectral data sets demonstrate that the proposed method not only displays the HSI with natural colors but also improves the details in the image. The effectiveness of the proposed method is demonstrated in terms of both visual effect and objective indexes. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
19. Hyperspectral Anomaly Detection With Kernel Isolation Forest.
- Author
-
Li, Shutao, Zhang, Kunzhong, Duan, Puhong, and Kang, Xudong
- Subjects
ANOMALY detection (Computer security), DATA mapping - Abstract
In this article, a novel hyperspectral anomaly detection method with kernel Isolation Forest (iForest) is proposed. The method is based on the assumption that anomalies, rather than background, are more susceptible to isolation in the kernel space. Based on this idea, the proposed method detects anomalies as follows. First, the hyperspectral data are mapped into the kernel space, and the first K principal components are used. Then, the isolation samples in the image are detected with the iForest constructed using randomly selected samples in the principal components. Finally, the initial anomaly detection map is iteratively refined with locally constructed iForests in connected regions with large areas. Experimental results on several real hyperspectral data sets demonstrate that the proposed method outperforms other state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
20. Fusion of Multiple Edge-Preserving Operations for Hyperspectral Image Classification.
- Author
-
Duan, Puhong, Kang, Xudong, Li, Shutao, Ghamisi, Pedram, and Benediktsson, Jon Atli
- Subjects
SUPPORT vector machines, PLURALITY voting, MULTISENSOR data fusion - Abstract
In this article, a novel hyperspectral image (HSI) classification method based on fusing multiple edge-preserving operations (EPOs) is proposed, which consists of the following steps. First, the edge-preserving features are obtained by performing different types of EPOs, i.e., local edge-preserving filtering and global edge-preserving smoothing, on the dimension-reduced HSI. Then, with the assistance of a superpixel segmentation method, the edge-preserving features are further improved by considering the inter- and intra-spectral properties of superpixels. Finally, the spectral and edge-preserving features are fused to form one composite kernel, which is fed into a support vector machine (SVM) followed by a majority voting fusion scheme. Experimental results on three data sets demonstrate the superiority of the proposed method over several state-of-the-art classification approaches, especially when the training sample size is limited. Furthermore, 21 well-known methods, including mathematical morphology-based approaches, sparse representation models, and deep learning-based classifiers, are compared with the proposed method on the Houston data set with the standard sets of training and test samples released during the 2013 Data Fusion Contest, which also shows the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. Noise-Robust Hyperspectral Image Classification via Multi-Scale Total Variation.
- Author
-
Duan, Puhong, Kang, Xudong, Li, Shutao, and Ghamisi, Pedram
- Abstract
In this paper, a novel multi-scale total variation method is proposed to extract structural features from hyperspectral images (HSIs), which consists of the following steps. First, the spectral dimension of the HSI is reduced with an averaging-based method. Then, the multi-scale structural features (MSFs), which are insensitive to image noise, are constructed with a relative total variation-based structure extraction technique. Finally, the MSFs are fused together using kernel principal component analysis (KPCA) to obtain the KPCA-fused MSFs for classification. Experimental results on three publicly available hyperspectral datasets, including both well-known, long-used data and a recent dataset obtained from an international contest, demonstrate competitive performance over several state-of-the-art classification approaches in this field. Moreover, the robustness of the proposed method to the small-sample-size problem and serious image noise is also demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
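The three steps of the abstract can be sketched as below. A Gaussian filter stands in for the paper's relative total variation-based structure extraction, and the function name, group count, and scale values are our assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import KernelPCA

def mstv_features(hsi, n_groups=5, scales=(1.0, 2.0, 4.0), n_components=10):
    """hsi: (H, W, B) -> (H*W, n_components) KPCA-fused multi-scale features."""
    h, w, b = hsi.shape
    # Step 1: averaging-based spectral reduction into n_groups bands.
    reduced = np.stack([g.mean(axis=-1)
                        for g in np.array_split(hsi, n_groups, axis=-1)], axis=-1)
    # Step 2: structural features at several smoothing scales
    # (placeholder for relative total variation smoothing).
    msf = np.concatenate([
        np.stack([gaussian_filter(reduced[:, :, i], sigma=s)
                  for i in range(n_groups)], axis=-1)
        for s in scales], axis=-1)
    # Step 3: fuse the stacked multi-scale features with kernel PCA.
    return KernelPCA(n_components=n_components,
                     kernel="rbf").fit_transform(msf.reshape(h * w, -1))
```

The returned per-pixel features would then be fed to a spectral classifier such as an SVM.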
22. Semi-supervised deep learning for hyperspectral image classification.
- Author
-
Kang, Xudong, Zhuo, Binbin, and Duan, Puhong
- Subjects
HYPERSPECTRAL imaging systems, REMOTE sensing, ARTIFICIAL neural networks, MACHINE learning, REMOTE-sensing images - Abstract
Recently, a series of deep learning methods based on convolutional neural networks (CNNs) have been introduced for the classification of hyperspectral images (HSIs). However, a large number of training samples are required to obtain the optimal CNN parameters and avoid the overfitting problem. In this paper, a novel method is proposed to extend the training set for deep learning-based hyperspectral image classification. First, given a small-sample-size training set, principal component analysis-based edge-preserving features (PCA-EPFs) and extended morphological attribute profiles (EMAPs) are used for HSI classification so as to generate classification probability maps. Second, a large number of pseudo training samples are obtained by a designed decision function that depends on the classification probabilities. Finally, a deep feature fusion network (DFFN) is applied to classify the HSI with a training set consisting of the original small-sample-size training set and the added pseudo training samples. Experiments performed on several hyperspectral data sets demonstrate the state-of-the-art performance of the proposed method in terms of classification accuracies. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Feature extraction from hyperspectral images using learned edge structures.
- Author
-
Zhang, Ying, Kang, Xudong, Li, Shutao, Duan, Puhong, and Benediktsson, Jón Atli
- Subjects
HYPERSPECTRAL imaging systems, SUPPORT vector machines, RANDOM forest algorithms, LAND cover, IMAGE processing - Abstract
In this letter, a novel edge-preserving filtering-based approach is proposed for feature extraction of hyperspectral images, which consists of the following steps. First, the dimension of the hyperspectral image is reduced with an averaging-based method. Then, the resulting features are obtained by performing edge-preserving filtering on the dimension-reduced image, in which a learned edge detection map serves as one of the major cues in the filtering process. The advantage of the proposed method is that it makes full use of the learned edge information in the feature extraction process and is thus able to improve performance with respect to other traditional feature extraction methods. Experiments on two real hyperspectral data sets demonstrate the outstanding performance of the proposed method, especially when the number of training samples is limited. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Dual-Path Network-Based Hyperspectral Image Classification.
- Author
-
Kang, Xudong, Zhuo, Binbin, and Duan, Puhong
- Abstract
Recently, convolutional neural networks (CNNs) have been introduced as a powerful tool for the classification of hyperspectral images (HSIs). However, they fail to take feature redundancy into consideration. Hence, for pixel-wise HSI classification, CNN-based methods may not effectively extract discriminative features from the complex scenes in HSIs. To overcome this problem, in this letter, a novel dual-path network (DPN)-based HSI classification method is proposed, in which the DPN combines the advantages of the residual network and the dense convolutional network. First, principal component analysis is utilized to extract the significant components of the HSI. Second, training image patches centered on labeled pixels are constructed to train the DPN. Finally, the labels of test pixels are predicted using the trained network. Experiments conducted on two hyperspectral data sets demonstrate the state-of-the-art performance of the proposed method over other compared methods in terms of classification accuracies. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. Detection and Correction of Mislabeled Training Samples for Hyperspectral Image Classification.
- Author
-
Kang, Xudong, Duan, Puhong, Xiang, Xuanlin, Li, Shutao, and Benediktsson, Jon Atli
- Subjects
INFORMATION filtering, HYPERSPECTRAL imaging systems, IMAGING systems, IMAGE, TECHNOLOGY - Abstract
In this paper, a novel method is introduced to detect and correct mislabeled training samples for hyperspectral image classification. First, domain transform recursive filtering-based feature extraction is used to improve the separability of the training samples. Then, constrained energy minimization-based object detection is performed on the training set with each training sample serving as the object spectrum. Finally, the label of each training sample is verified or corrected based on the averaged detection probabilities of different classes. Experiments performed on real hyperspectral data sets demonstrate the effectiveness of the proposed method in improving classification performance with respect to the classifier trained with the original training set that contains a number of mislabeled samples. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
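The constrained energy minimization (CEM) step described above is the classical filter w = R^{-1}d / (d^T R^{-1}d), which responds with exactly 1 to the target spectrum d while minimizing the average output energy. A minimal sketch follows; the regularization term and function name are our assumptions.

```python
import numpy as np

def cem_detector(pixels, target):
    """Constrained energy minimization.
    pixels: (N, B) spectra; target: (B,) target spectrum d.
    Returns the (N,) detection output, with w chosen so that w^T d = 1."""
    R = pixels.T @ pixels / len(pixels)   # sample correlation matrix
    R += 1e-6 * np.eye(R.shape[0])        # small ridge for stable inversion
    w = np.linalg.solve(R, target)        # R^{-1} d
    w /= target @ w                       # enforce the unit-response constraint
    return pixels @ w                     # per-pixel detection values
```

In the paper's scheme, each training sample in turn serves as the target spectrum, and the detection outputs averaged per class are used to verify or correct that sample's label.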
26. Hyperspectral Anomaly Detection With Multiscale Attribute and Edge-Preserving Filters.
- Author
-
Li, Shutao, Zhang, Kunzhong, Hao, Qiaobo, Duan, Puhong, and Kang, Xudong
- Abstract
In this letter, a novel anomaly detection method is proposed, which can effectively fuse the multiscale information extracted by attribute and edge-preserving filters. The proposed method consists of the following steps. First, multiscale attribute and edge-preserving filters are utilized to obtain multiscale anomaly detection maps. Then, the multiscale detection maps are fused via an averaging approach, and the training samples of the anomalies and background are selected from the fused detection map. Next, the support vector machine classification is performed on the hyperspectral image to obtain an anomaly probability map. Finally, the detection result is obtained by multiplying the fused detection map and the anomaly probability map, followed by an edge-preserving filtering-based postprocessing. Experiments performed on four real hyperspectral data sets demonstrate that the proposed method shows a better detection performance with respect to several state-of-the-art hyperspectral anomaly detection methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
27. Decolorization-Based Hyperspectral Image Visualization.
- Author
-
Kang, Xudong, Duan, Puhong, Li, Shutao, and Benediktsson, Jon Atli
- Subjects
IMAGE, HYPERSPECTRAL imaging systems, VISUALIZATION, ENERGY bands, EXPERIMENTS - Abstract
Image decolorization is known to be an effective way of converting a color image into a gray one while preserving the major information of all three bands. In this paper, a simple yet effective hyperspectral image visualization framework based on decolorization, named decolorization-based hyperspectral visualization, is proposed, which enables us to fully exploit the benefits of the decolorization technique. The proposed framework consists of the following two main steps. First, the hyperspectral image is partitioned into nine subsets of adjacent hyperspectral bands, and the averaged band of each subset is calculated. Then, the dimension-reduced image is further divided into three groups of adjacent bands, and the bands in each group are fused using an image decolorization method. The main contribution of this paper is that a strong connection between two different fields, i.e., image decolorization and hyperspectral image visualization, is built for the first time. Experiments performed on several real hyperspectral data sets demonstrate that the proposed framework obtains outstanding visualization performance in terms of both subjective and objective evaluations. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
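The two-step framework above can be sketched as follows. Simple per-group averaging stands in for the paper's image-decolorization operator, and the function name and normalization are illustrative assumptions.

```python
import numpy as np

def decolorization_visualize(hsi):
    """hsi: (H, W, B) -> (H, W, 3) RGB display image."""
    # Step 1: partition into nine subsets of adjacent bands, average each.
    nine = np.stack([sub.mean(axis=-1)
                     for sub in np.array_split(hsi, 9, axis=-1)], axis=-1)
    # Step 2: split the nine averaged bands into three adjacent groups and
    # fuse each group into one channel (mean as a stand-in for the paper's
    # decolorization method, which would fuse each 3-band group like an
    # RGB-to-gray conversion).
    rgb = np.stack([grp.mean(axis=-1)
                    for grp in np.array_split(nine, 3, axis=-1)], axis=-1)
    # Normalize to [0, 1] for display.
    rgb = rgb - rgb.min()
    rgb = rgb / max(rgb.max(), 1e-12)
    return rgb
```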
28. A novel infrared and visible image fusion algorithm based on shift-invariant dual-tree complex shearlet transform and sparse representation.
- Author
-
Yin, Ming, Duan, Puhong, Liu, Wei, and Liang, Xiangyu
- Subjects
IMAGE representation, INFRARED imaging, IMAGE fusion, INVARIANTS (Mathematics), COEFFICIENTS (Statistics), IMAGE quality analysis, ARTIFICIAL neural networks - Abstract
In this paper, a novel shift-invariant dual-tree complex shearlet transform (SIDCST) is constructed and applied to infrared and visible image fusion. First, mathematical morphology is applied to the source images. Then, the images are decomposed by the SIDCST to obtain low-frequency and high-frequency sub-band coefficients. For the low-frequency sub-band coefficients, a novel sparse representation (SR)-based fusion rule is presented. For the high-frequency sub-band coefficients, a scheme based on an adaptive dual-channel pulse-coupled neural network (2APCNN) is presented, in which edge energy is used as the external input of the 2APCNN. Finally, the fused image is obtained by performing the inverse SIDCST. The experimental results show that the proposed approach achieves state-of-the-art performance compared with conventional image fusion methods in terms of both objective evaluation criteria and visual quality. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
29. Multilevel Structure Extraction-Based Multi-Sensor Data Fusion.
- Author
-
Duan, Puhong, Kang, Xudong, Ghamisi, Pedram, and Liu, Yu
- Subjects
- *
MULTISENSOR data fusion , *OPTICAL radar , *SYNTHETIC aperture radar , *LIDAR - Abstract
Multi-sensor data on the same area provide complementary information, which is helpful for improving the discrimination capability of classifiers. In this work, a novel multilevel structure extraction method is proposed to fuse multi-sensor data. The method comprises three steps: First, a multilevel structure extraction is constructed by cascading morphological profiles and structure features and is utilized to extract spatial information from the original images. Then, a low-rank model is adopted to integrate the extracted spatial information. Finally, a spectral classifier is employed to calculate class probabilities, and a maximum a posteriori estimation model is used to decide the final labels. Experiments on three datasets covering rural and urban scenes validate that the proposed approach produces promising performance with regard to both subjective and objective qualities. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. Data Science in Economics: Comprehensive Review of Advanced Machine Learning and Deep Learning Methods.
- Author
-
Nosratabadi, Saeed, Mosavi, Amirhosein, Duan, Puhong, Ghamisi, Pedram, Filip, Ferdinand, Band, Shahab S., Reuter, Uwe, Gama, Joao, and Gandomi, Amir H.
- Subjects
DEEP learning ,MACHINE learning ,DATA science ,BLENDED learning ,ECONOMIC research ,CRYPTOCURRENCIES ,ECONOMIC databases - Abstract
This paper provides a comprehensive state-of-the-art investigation of recent advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning models, and ensemble models. Application domains include a broad and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which outperform other learning algorithms. It is further expected that the trends will converge toward the evolution of sophisticated hybrid deep learning models. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
31. Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics.
- Author
-
Mosavi, Amirhosein, Faghan, Yaser, Ghamisi, Pedram, Duan, Puhong, Ardabili, Sina Faizollahzadeh, Salwana, Ely, and Band, Shahab S.
- Subjects
REINFORCEMENT learning ,DEEP learning ,MATHEMATICAL economics ,ARTIFICIAL intelligence ,SCALABILITY ,DYNAMICAL systems - Abstract
The popularity of deep reinforcement learning (DRL) applications in economics has increased exponentially. DRL, through a wide range of capabilities from reinforcement learning (RL) to deep learning (DL), offers vast opportunities for handling sophisticated dynamic economic systems. DRL is characterized by scalability, with the potential to be applied to high-dimensional problems in conjunction with noisy and nonlinear patterns of economic data. In this paper, we first present a brief review of DL, RL, and DRL methods in diverse applications in economics, providing an in-depth insight into the state of the art. Furthermore, the architecture of DRL applied to economic applications is investigated in order to highlight complexity, robustness, accuracy, performance, computational tasks, risk constraints, and profitability. The survey results indicate that DRL can provide better performance and higher efficiency than traditional algorithms when facing real economic problems in the presence of risk parameters and ever-increasing uncertainties. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
32. Component Decomposition-Based Hyperspectral Resolution Enhancement for Mineral Mapping.
- Author
-
Duan, Puhong, Lai, Jibao, Ghamisi, Pedram, Kang, Xudong, Jackisch, Robert, Kang, Jian, and Gloaguen, Richard
- Subjects
- *
IMAGE intensifiers , *MINERALOGY , *REFLECTANCE - Abstract
Combining both spectral and spatial information with enhanced resolution provides not only elaborate qualitative information on surface mineralogy but also mineral interactions of abundance, mixture, and structure. This enhancement in resolution helps geomineralogic features such as small intrusions and mineralization become detectable. In this paper, we investigate the potential of resolution enhancement of hyperspectral images (HSIs) with the guidance of RGB images for mineral mapping. In more detail, a novel resolution enhancement method based on component decomposition is proposed. Inspired by the intrinsic image decomposition (IID) model, the HSI is viewed as the combination of a reflectance component and an illumination component. Based on this idea, the proposed method comprises several steps. First, the RGB image is transformed into YCbCr space, i.e., the luminance component and the blue-difference and red-difference chroma components, and the luminance channel is considered the illumination component of an ideal HSI with high spatial resolution. Then, the reflectance component of the ideal HSI is estimated from the downsampled HSI and the downsampled luminance channel. Finally, the high-resolution HSI is reconstructed from the obtained illumination and reflectance components. Experimental results verify that the fused results can successfully achieve mineral mapping, producing better results both qualitatively and quantitatively than single-sensor data. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
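The IID-based reconstruction described in the abstract above can be sketched as follows. This is an illustrative approximation under stated assumptions, not the paper's algorithm: an integer scale factor, block-average downsampling, nearest-neighbour upsampling, and BT.601 luma coefficients for the Y channel are all choices made here for brevity.

```python
import numpy as np

def down(a, f):
    """Block-average downsample a 2-D array by integer factor f."""
    h, w = a.shape
    return a.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def up(a, f):
    """Nearest-neighbour upsample a 2-D array by integer factor f."""
    return np.kron(a, np.ones((f, f)))

def enhance(hsi_lr, rgb_hr, f, eps=1e-6):
    """hsi_lr: (h, w, B) low-res HSI; rgb_hr: (f*h, f*w, 3) high-res RGB.
    IID model: each band = illumination * reflectance. The high-res
    luminance is taken as illumination; reflectance is estimated at low
    resolution and upsampled."""
    y_hr = 0.299 * rgb_hr[..., 0] + 0.587 * rgb_hr[..., 1] + 0.114 * rgb_hr[..., 2]
    y_lr = down(y_hr, f)                                # illumination at low res
    refl_lr = hsi_lr / (y_lr[..., None] + eps)          # reflectance estimate
    refl_hr = np.stack([up(refl_lr[..., b], f)
                        for b in range(hsi_lr.shape[2])], axis=2)
    return y_hr[..., None] * refl_hr                    # reconstructed high-res HSI
```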
33. Feature Consistency-Based Prototype Network for Open-Set Hyperspectral Image Classification.
- Author
-
Xie Z, Duan P, Liu W, Kang X, Wei X, and Li S
- Abstract
Hyperspectral image (HSI) classification methods have made great progress in recent years. However, most of these methods are rooted in the closed-set assumption that the class distribution in the training and testing stages is consistent, which cannot handle the unknown class in open-world scenes. In this work, we propose a feature consistency-based prototype network (FCPN) for open-set HSI classification, which is composed of three steps. First, a three-layer convolutional network is designed to extract the discriminative features, where a contrastive clustering module is introduced to enhance the discrimination. Then, the extracted features are used to construct a scalable prototype set. Finally, a prototype-guided open-set module (POSM) is proposed to identify the known samples and unknown samples. Extensive experiments reveal that our method achieves remarkable classification performance over other state-of-the-art classification techniques.
- Published
- 2024
- Full Text
- View/download PDF
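The prototype-guided rejection idea in the FCPN abstract (match known samples to class prototypes, flag everything else as unknown) can be illustrated with a distance-threshold rule. This is a generic sketch, not the paper's POSM: the Euclidean metric, the fixed `threshold`, and the `-1` unknown label are illustrative assumptions.

```python
import numpy as np

def open_set_predict(features, prototypes, threshold):
    """Assign each feature vector to its nearest class prototype;
    reject it as 'unknown' (label -1) when even the nearest prototype
    is farther away than threshold."""
    d = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    labels[d.min(axis=1) > threshold] = -1
    return labels
```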
34. Click-Pixel Cognition Fusion Network With Balanced Cut for Interactive Image Segmentation.
- Author
-
Lin J, Xiao Z, Wei X, Duan P, He X, Dian R, Li Z, and Li S
- Abstract
Interactive image segmentation (IIS) has been widely used in various fields, such as medicine and industry. However, some core issues, such as pixel imbalance, remain unresolved. Different from existing methods based on pre-processing or post-processing, we analyze the cause of pixel imbalance in depth from the two perspectives of pixel number and pixel difficulty. Based on this, a novel and unified Click-pixel Cognition Fusion network with Balanced Cut (CCF-BC) is proposed in this paper. On the one hand, the Click-pixel Cognition Fusion (CCF) module, inspired by the human cognition mechanism, is designed to increase the number of click-related pixels (namely, positive pixels) that are correctly segmented, where the click and visual information are fully fused by using a progressive three-tier interaction strategy. On the other hand, a general loss, the Balanced Normalized Focal Loss (BNFL), is proposed. Its core is to use a group of control coefficients related to sample gradients to force the network to pay more attention to positive and hard-to-segment pixels during training. As a result, BNFL always tends to obtain a balanced cut of positive and negative samples in the decision space. Theoretical analysis shows that the commonly used Focal and BCE losses can be regarded as special cases of BNFL. Experimental results on five well-recognized datasets show the superiority of the proposed CCF-BC method compared to other state-of-the-art methods. The source code is publicly available at https://github.com/lab206/CCF-BC.
- Published
- 2024
- Full Text
- View/download PDF
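The abstract states that the Focal and BCE losses are special cases of BNFL. The exact control coefficients of BNFL are not given in this listing, but the relationship can be seen from the standard binary focal loss, which itself reduces to BCE at gamma = 0; a minimal numpy version:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Element-wise binary focal loss; gamma=0 reduces to plain BCE.
    p: predicted foreground probabilities in [0, 1]; y: {0, 1} targets."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)    # probability of the true class
    return -((1.0 - pt) ** gamma) * np.log(pt)
```

The `(1 - pt) ** gamma` factor is what down-weights easy examples; BNFL, per the abstract, replaces such a fixed modulating factor with gradient-related control coefficients.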
35. SOSNet: Real-Time Small Object Segmentation via Hierarchical Decoding and Example Mining.
- Author
-
Liu W, Kang X, Duan P, Xie Z, Wei X, and Li S
- Abstract
Real-time semantic segmentation plays an important role in autonomous vehicles. However, most real-time semantic segmentation methods fail to obtain satisfactory performance on small objects, such as cars and sign symbols, since large objects usually contribute more to the segmentation result. To solve this issue, we propose an efficient and effective architecture, termed the small object segmentation network (SOSNet), to improve the segmentation performance on small objects. The SOSNet works from two perspectives: methodology and data. For the former, we propose a dual-branch hierarchical decoder (DBHD), which serves as a small-object-sensitive segmentation head. The DBHD consists of a top segmentation head that predicts whether pixels belong to a small-object class and a bottom one that estimates the pixel class. In this way, the latent correlation among small objects can be fully explored. For the latter, we propose a small object example mining (SOEM) algorithm that automatically balances examples between small and large objects. The core idea of SOEM is that most of the hard examples of small-object classes are reserved for training, while most of the easy examples of large-object classes are banned. Experiments on three commonly used datasets show that the proposed SOSNet architecture greatly improves accuracy compared with existing real-time semantic segmentation methods while retaining efficiency. The code will be available at https://github.com/StuLiu/SOSNet.
- Published
- 2023
- Full Text
- View/download PDF
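The SOEM idea above (reserve hard examples of small-object classes, ban easy examples of large-object classes) can be sketched as a per-pixel selection mask. This is a reading of the abstract alone, not the paper's algorithm; the kept fraction (`large_keep_frac`) and the quantile rule are hypothetical.

```python
import numpy as np

def soem_mask(losses, is_small, large_keep_frac=0.3):
    """Select pixels to contribute to the training loss: keep every
    small-object pixel, and among large-object pixels keep only the
    hardest (highest-loss) large_keep_frac fraction."""
    keep = is_small.copy()
    large = losses[~is_small]
    if large.size:
        thresh = np.quantile(large, 1.0 - large_keep_frac)
        keep |= (~is_small) & (losses >= thresh)
    return keep
```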
36. LRAF-Net: Long-Range Attention Fusion Network for Visible-Infrared Object Detection.
- Author
-
Fu H, Wang S, Duan P, Xiao C, Dian R, Li S, and Li Z
- Abstract
Visible-infrared object detection aims to improve the detector performance by fusing the complementarity of visible and infrared images. However, most existing methods only use local intramodality information to enhance the feature representation while ignoring the efficient latent interaction of long-range dependence between different modalities, which leads to unsatisfactory detection performance under complex scenes. To solve these problems, we propose a feature-enhanced long-range attention fusion network (LRAF-Net), which improves detection performance by fusing the long-range dependence of the enhanced visible and infrared features. First, a two-stream CSPDarknet53 network is used to extract the deep features from visible and infrared images, in which a novel data augmentation (DA) method is designed to reduce the bias toward a single modality through asymmetric complementary masks. Then, we propose a cross-feature enhancement (CFE) module to improve the intramodality feature representation by exploiting the discrepancy between visible and infrared images. Next, we propose a long-range dependence fusion (LDF) module to fuse the enhanced features by associating the positional encoding of multimodality features. Finally, the fused features are fed into a detection head to obtain the final detection results. Experiments on several public datasets, i.e., VEDAI, FLIR, and LLVIP, show that the proposed method obtains state-of-the-art performance compared with other methods.
- Published
- 2023
- Full Text
- View/download PDF