7 results on "Xiaodong Mu"
Search Results
2. Hausdorff IoU and Context Maximum Selection NMS: Improving Object Detection in Remote Sensing Images With a Novel Metric and Postprocessing Module
- Author
-
Xiaodong Mu, Lizhi Wang, Jinjin Zhang, and Chenhui Ma
- Subjects
Hausdorff distance, Intersection, Computer science, Metric (mathematics), Benchmark (computing), Context (language use), Electrical and Electronic Engineering, Geotechnical Engineering and Engineering Geology, Object (computer science), Convolutional neural network, Object detection, Remote sensing
- Abstract
Object detectors based on deep convolutional neural networks have achieved significant success on remote sensing imagery. Intersection over Union (IoU) and non-maximum suppression (NMS) are essential components of state-of-the-art anchor-based object detectors. However, as a localization evaluation metric, IoU does not precisely match bounding box regression, leading to inaccurate regression in the object detector. Therefore, we introduce the Hausdorff distance and combine it with IoU as a new evaluation metric (HIoU). NMS is an integral part of the object detection pipeline, but it may discard relatively small objects when overlap is high. Because objects are densely distributed, this defect is more prominent in remote sensing image object detection. Therefore, we consider the context information of location confidence and propose the context maximum selection NMS (Cms-NMS) algorithm. Finally, we integrate HIoU and Cms-NMS separately into state-of-the-art object detectors. The performance of these detectors improves on the benchmark datasets NWPU VHR-10 and RSOD without any additional hyperparameters. The experiments show that HIoU and Cms-NMS are compatible, and using them together can further improve the detectors' accuracy. (A minimal illustrative code sketch of the HIoU idea follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
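The abstract above does not give the exact HIoU formula, so the following is only a plausible Python sketch of the idea: standard IoU between axis-aligned boxes combined with a Hausdorff-distance penalty computed on the boxes' corner points and normalised by a user-supplied diagonal. The function names, the corner-point simplification, and the subtractive combination are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def iou(box_a, box_b):
    """Standard Intersection over Union for axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def hausdorff_corners(box_a, box_b):
    """Hausdorff distance between the two boxes' corner point sets."""
    pts_a = np.array([[box_a[0], box_a[1]], [box_a[2], box_a[1]],
                      [box_a[2], box_a[3]], [box_a[0], box_a[3]]])
    pts_b = np.array([[box_b[0], box_b[1]], [box_b[2], box_b[1]],
                      [box_b[2], box_b[3]], [box_b[0], box_b[3]]])
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def hiou(box_a, box_b, diag):
    """Hypothetical HIoU: IoU minus a Hausdorff penalty normalised by the
    diagonal `diag` of an enclosing region (the paper's formulation may differ)."""
    return iou(box_a, box_b) - hausdorff_corners(box_a, box_b) / diag
```

In the role the abstract describes, `hiou` would replace plain IoU as the localization metric used by the detector.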
3. Object Detection Based on Efficient Multiscale Auto-Inference in Remote Sensing Images
- Author
-
Guangjie Kou, Shaojing Zhang, Xiaodong Mu, and Jingyu Zhao
- Subjects
Artificial neural network, Computer science, Feature extraction, Process (computing), Inference, Geotechnical Engineering and Engineering Geology, Convolutional neural network, Object detection, Data set, Electrical and Electronic Engineering, Image resolution, Remote sensing
- Abstract
Object detection in remote sensing images has important applications in many fields. Object detection algorithms based on deep convolutional neural networks (DCNNs) have made remarkable progress. However, processing objects at vastly different scales in high-resolution optical remote sensing images incurs a high computational cost. Therefore, to simplify multiscale training and inference, an automatic multiscale inference framework is proposed to balance the speed and accuracy of object detection. We employ an attention mechanism based on a key-point network to predict regions containing small objects at a coarse scale, and then process only those regions at finer scales instead of an entire larger-scale image. The fully convolutional neural network (CNN) used for training and detection is not affected by the input image resolution. Experiments are carried out on the NWPU VHR-10 data set, and the results show that these methods improve training efficiency and detection accuracy on remote sensing images. (A minimal illustrative code sketch of this pipeline follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
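As a reading aid, here is a minimal coarse-to-fine inference sketch in Python matching the pipeline the abstract describes. `detector` and `propose_small_object_regions` are hypothetical callables standing in for the paper's fully convolutional detector and key-point attention network; the stride-based down-sampling and box bookkeeping are assumptions for illustration.

```python
def multiscale_auto_inference(image, detector, propose_small_object_regions, stride=4):
    """image: NumPy array (H, W) or (H, W, C); detector(img) returns integer boxes
    (x1, y1, x2, y2) in the coordinates of the image it is given;
    propose_small_object_regions(img) returns integer windows on the coarse view."""
    # Stage 1: detect on a coarse, down-sampled view of the whole image.
    coarse = image[::stride, ::stride]
    detections = [tuple(c * stride for c in box) for box in detector(coarse)]

    # Stage 2: re-run the detector at full resolution, but only inside the
    # windows the attention network flags as likely to contain small objects.
    for (x1, y1, x2, y2) in propose_small_object_regions(coarse):
        x1, y1, x2, y2 = (v * stride for v in (x1, y1, x2, y2))
        crop = image[y1:y2, x1:x2]
        detections += [(bx1 + x1, by1 + y1, bx2 + x1, by2 + y1)
                       for (bx1, by1, bx2, by2) in detector(crop)]
    return detections
```

The saving comes from stage 2 touching only the proposed windows rather than the full image at fine resolution.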
4. Multilayer Feature Fusion With Weight Adjustment Based on a Convolutional Neural Network for Remote Sensing Scene Classification
- Author
-
Renpu Lin, Shuyang Wang, Chenhui Ma, and Xiaodong Mu
- Subjects
Computer science, Feature extraction, Geotechnical Engineering and Engineering Geology, Convolutional neural network, Set (abstract data type), Kernel (linear algebra), Feature (computer vision), Key (cryptography), Redundancy (engineering), Electrical and Electronic Engineering, Representation (mathematics), Remote sensing
- Abstract
Remote sensing scene classification is still a challenging task. Extracting features effectively from the limited labeled data available is key to scene classification. Convolutional neural networks (CNNs) are an effective means of constructing discriminative feature representations. However, CNNs usually utilize the feature map from the last layer and ignore additional layers with valuable feature information. In addition, directly integrating multiple layers brings only a small improvement because of feature redundancy and destruction. To exploit the potential information in additional layers and improve the effect of feature fusion, we propose multilayer feature fusion with weight adjustment based on a CNN. We construct access paths that deliver features from additional layers to one layer to achieve fusion, and set weight factors to adjust the fusion degree and thus reduce feature redundancy and destruction. Experiments on two common data sets indicate improved accuracy and demonstrate the advantages of our method's feature extraction capability. (A minimal illustrative code sketch of weighted fusion follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
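A minimal PyTorch sketch of weighted multilayer fusion in the spirit of the abstract: feature maps from several layers are projected, resized to a common resolution, and blended with learnable weight factors. The 1x1 projections, bilinear resizing, and softmax-normalised weights are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedMultilayerFusion(nn.Module):
    """Deliver feature maps from earlier CNN layers to a target layer and blend
    them with learnable weight factors (illustrative sketch only)."""
    def __init__(self, in_channels_list, out_channels):
        super().__init__()
        # 1x1 convolutions project every source layer to a common channel width.
        self.projs = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels_list])
        # One learnable weight factor per source layer controls its fusion degree.
        self.weights = nn.Parameter(torch.zeros(len(in_channels_list)))

    def forward(self, feature_maps):
        target_size = feature_maps[-1].shape[-2:]   # fuse at the last layer's resolution
        w = torch.softmax(self.weights, dim=0)      # normalised fusion weights
        fused = 0
        for wi, proj, fmap in zip(w, self.projs, feature_maps):
            fmap = F.interpolate(proj(fmap), size=target_size,
                                 mode='bilinear', align_corners=False)
            fused = fused + wi * fmap
        return fused

# Example: fuse hypothetical backbone stages with 256, 512 and 512 channels.
fusion = WeightedMultilayerFusion([256, 512, 512], out_channels=512)
```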
5. SAR Target Image Classification Based on Transfer Learning and Model Compression
- Author
-
Xiangchen He, Xiaodong Mu, Chengliang Zhong, Jiaxin Wang, and Ming Zhu
- Subjects
Synthetic aperture radar, Speedup, Contextual image classification, Computer science, Feature extraction, Pattern recognition, Filter (signal processing), Overfitting, Geotechnical Engineering and Engineering Geology, Convolutional neural network, Artificial intelligence, Pruning (decision trees), Electrical and Electronic Engineering
- Abstract
When convolutional neural networks (CNNs) are applied to synthetic aperture radar (SAR) image classification, they are prone to overfitting because SAR image data are scarce; they also require large amounts of storage and long computing times, making them difficult to deploy on resource-constrained devices. This letter proposes a simple and feasible approach that effectively addresses these problems. First, the convolutional layers of a model pretrained on the ImageNet data set are transferred, and a new convolutional layer and a global pooling layer are appended. Then, the new network is fine-tuned on the SAR image data set. Finally, a filter-based pruning method is applied to the convolutional layers to obtain a compact network. Compared with the all-convolutional network (A-ConvNets), the state-of-the-art method on the Moving and Stationary Target Acquisition and Recognition (MSTAR) data set, our method achieves about a 3.6× speedup in forward propagation and 3.7× compression of the parameters, with only a 1.42% decrease in accuracy. (A minimal illustrative pruning sketch follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
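The letter's pruning step is filter-based; a common way to realise that is to rank each convolutional layer's filters by the L1 norm of their weights and keep the strongest ones. The sketch below follows that common practice under an assumed keep ratio; the letter's exact criterion and ratios are not stated in the abstract.

```python
import torch
import torch.nn as nn

def l1_filter_prune_indices(conv: nn.Conv2d, keep_ratio: float = 0.7):
    """Rank a convolutional layer's filters by the L1 norm of their weights and
    return the indices of the filters to keep (illustrative criterion)."""
    # conv.weight has shape (out_channels, in_channels, kH, kW); one score per filter.
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    n_keep = max(1, int(round(keep_ratio * conv.out_channels)))
    keep = torch.argsort(scores, descending=True)[:n_keep]
    return torch.sort(keep).values  # preserve the original filter order

# Example: decide which filters of a (hypothetical) fine-tuned layer survive pruning.
layer = nn.Conv2d(64, 128, kernel_size=3, padding=1)
kept = l1_filter_prune_indices(layer, keep_ratio=0.5)   # 64 of 128 filters kept
pruned_weight = layer.weight.detach()[kept]             # weights of the surviving filters
```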
6. Classification for SAR Scene Matching Areas Based on Convolutional Neural Networks
- Author
-
Xiangchen He, Xiaodong Mu, Ben Niu, Bichao Zhan, and Chengliang Zhong
- Subjects
Synthetic aperture radar, Matching (statistics), Computer science, Feature extraction, Pattern recognition, Geotechnical Engineering and Engineering Geology, Grayscale, Convolutional neural network, Field (computer science), Support vector machine, Artificial intelligence, Electrical and Electronic Engineering, Digital elevation model
- Abstract
The selection of scene matching areas is a difficult problem in the field of matching guidance. In contrast to traditional methods based on hand-crafted matching features and pattern classification, this letter applies convolutional neural networks (CNNs) to the extraction of synthetic aperture radar (SAR) scene matching regions for the first time. First, we match SAR images of the same area taken by satellites from different angles and in different phases, and then automatically label the matching suitability of the images, which serves as the network output, according to the matching results. Next, digital elevation model data, which reflect the elevation information, are fused with the SAR image grayscale information as the input to the network. Finally, the CNN is used to automatically extract matching features and classify the suitability of the SAR images. The proposed method avoids manual feature extraction and improves the classification performance for SAR scene matching areas. Compared with the support vector machine method, the classification accuracy increases from 86.1% to 93.3%. (A minimal illustrative input-fusion sketch follows this record.)
- Published
- 2018
- Full Text
- View/download PDF
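A minimal sketch of the input fusion the abstract describes: the co-registered DEM patch and the SAR grayscale patch are stacked as two channels of one input array. The per-channel min-max normalisation and the channels-first layout are assumptions for illustration.

```python
import numpy as np

def build_fused_input(sar_gray: np.ndarray, dem: np.ndarray) -> np.ndarray:
    """Fuse a SAR grayscale patch and its co-registered DEM patch into a
    two-channel, channels-first input array for a CNN classifier."""
    def scale(x):
        x = x.astype(np.float32)
        return (x - x.min()) / (x.max() - x.min() + 1e-9)
    assert sar_gray.shape == dem.shape, "patches must be co-registered and equal size"
    return np.stack([scale(sar_gray), scale(dem)], axis=0)   # shape: (2, H, W)

# Example: a 64x64 matching-area patch ready to feed a two-channel CNN classifier.
patch = build_fused_input(np.random.rand(64, 64), np.random.rand(64, 64))
```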
7. Adapting Remote Sensing to New Domain With ELM Parameter Transfer
- Author
-
Dong Chai, Suhui Xu, Shuyang Wang, and Xiaodong Mu
- Subjects
Linear programming, Artificial neural network, Computer science, Feature extraction, Pattern recognition, Geotechnical Engineering and Engineering Geology, Data mining, Artificial intelligence, Electrical and Electronic Engineering, Classifier (UML), Extreme learning machine, Remote sensing
- Abstract
It is time-consuming to annotate unlabeled remote sensing images. One strategy is to take labeled remote sensing images from another domain as training samples and predict the labels of the target domain by supervised classification. However, this may lead to negative transfer because of the distribution difference between the two domains. To address this issue, we propose a novel domain adaptation method that transfers the parameters of an extreme learning machine (ELM). The core of this method is learning a transformation that maps the target ELM parameters to the source, so that the classifier parameters of the target domain are maximally aligned with those of the source. Our method combines several advantages previously unavailable in a single method: multiclass adaptation through parameter transfer, simultaneous learning of the final classifier and the transformation, and avoidance of negative transfer. Experiments on three data sets indicate improved accuracy and computational advantages over baseline approaches. (A minimal illustrative ELM sketch follows this record.)
- Published
- 2017
- Full Text
- View/download PDF
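For readers unfamiliar with ELMs, the sketch below shows the basic model the abstract builds on: a fixed random hidden layer whose output weights are solved by ridge regression. The paper's parameter-transfer step is only indicated in a comment, since the abstract does not give its exact objective; all names here are illustrative.

```python
import numpy as np

def train_elm(X, Y, n_hidden=200, reg=1e-2, rng=np.random.default_rng(0)):
    """Plain extreme learning machine. X: (n_samples, n_features),
    Y: one-hot label matrix (n_samples, n_classes)."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (kept fixed)
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    # Ridge solution for the output weights: beta = (H^T H + reg*I)^(-1) H^T Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Domain adaptation idea (hedged): learn a transformation T so that the target
# classifier's parameters T @ beta_target stay close to beta_source while still
# fitting the available target data, which is how the abstract describes avoiding
# negative transfer.
```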