51 results for "Shouda Jiang"
Search Results
2. A Dual-input Fault Diagnosis Model Based on SE-MSCNN for Analog Circuits
- Author
- Jingli Yang, Tianyu Gao, and Shouda Jiang
- Subjects
- Artificial Intelligence
- Published
- 2022
3. Energy-Based Adversarial Transfer Network for Cross-Domain Fault Diagnosis of Electro-Mechanical Systems
- Author
- Jingli Yang, Tianyu Gao, and Shouda Jiang
- Subjects
- Electrical and Electronic Engineering, Instrumentation
- Published
- 2022
4. A novel fault diagnosis method for analog circuits with noise immunity and generalization ability
- Author
- Shouda Jiang, Tianyu Gao, and Jingli Yang
- Subjects
- Computer science, Dimensionality reduction, Feature extraction, Pattern recognition, Fault (power engineering), Linear discriminant analysis, Autoencoder, Support vector machine, Artificial intelligence, Noise (video), Cluster analysis, Feature learning, Software
- Abstract
To enhance the reliability of analog circuits in complex electrical systems, a novel fault diagnosis method is presented in this paper. A denoising autoencoder and a sparse autoencoder are combined, producing a feature extraction model named the denoising sparse deep autoencoder (DSDAE) that can obtain effective information from signals contaminated by noise. Compared with traditional feature extraction methods, the DSDAE model can be used to implement adaptive feature learning. Then, linear discriminant analysis is adopted to perform linear dimensionality reduction, thereby obtaining the maximum clustering features of the signals. Finally, a fault diagnosis model based on a support vector machine (SVM) with high versatility and accuracy is developed to identify the fault classes of analog circuits. In addition, the salp swarm algorithm, which offers fast convergence and strong global optimization capability, is employed to intelligently optimize the SVM classifier. The method is comprehensively evaluated with three typical analog circuits from the ISCAS’97 circuit set. The experimental results illustrate that the proposed fault diagnosis method can achieve excellent fault identification accuracy and generalization performance even under noise interference conditions.
- Published
- 2021
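The salp swarm algorithm named in the abstract above is a simple population-based optimizer. The sketch below is a generic, minimal Python version minimizing a toy two-dimensional objective that stands in for an SVM cross-validation score; the population size, iteration count, and objective are illustrative assumptions, not the paper's settings.

```python
import math
import random

def salp_swarm_minimize(objective, lb, ub, n_salps=30, n_iters=200, seed=0):
    """Minimize `objective` over the box [lb, ub]^d with the salp swarm algorithm."""
    rng = random.Random(seed)
    dim = len(lb)
    salps = [[rng.uniform(lb[j], ub[j]) for j in range(dim)] for _ in range(n_salps)]
    best, best_fit = None, float("inf")
    for t in range(n_iters):
        # Track the food source (best solution found so far).
        for s in salps:
            f = objective(s)
            if f < best_fit:
                best_fit, best = f, list(s)
        c1 = 2.0 * math.exp(-((4.0 * t / n_iters) ** 2))  # shrinking step coefficient
        for i, s in enumerate(salps):
            for j in range(dim):
                if i == 0:  # leader moves around the food source
                    c2, c3 = rng.random(), rng.random()
                    step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
                    s[j] = best[j] + step if c3 >= 0.5 else best[j] - step
                else:       # followers drift toward the salp ahead of them
                    s[j] = 0.5 * (s[j] + salps[i - 1][j])
                s[j] = min(max(s[j], lb[j]), ub[j])
    return best, best_fit

# Toy objective standing in for an SVM cross-validation score: a shifted sphere.
best, fit = salp_swarm_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                                lb=[-5.0, -5.0], ub=[5.0, 5.0])
```

The leader searches around the best solution with a step that decays over iterations, while followers chain toward it, which gives the algorithm its exploration-then-exploitation behavior.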
5. A Novel Incipient Fault Diagnosis Method for Analog Circuits Based on GMKL-SVM and Wavelet Fusion Features
- Author
- Shouda Jiang, Tianyu Gao, and Jingli Yang
- Subjects
- Computer science, Feature extraction, Fault (power engineering), Wavelet packet decomposition, Support vector machine, Wavelet, Band-pass filter, Skewness, Kernel Fisher discriminant analysis, Electrical and Electronic Engineering, Instrumentation, Algorithm, Electronic filter, Digital biquad filter
- Abstract
To enhance the reliability of analog circuits in complex electrical systems, a novel incipient fault diagnosis method is presented in this article. The wavelet packet feature quantities, which consist of the energy, fluctuation coefficient, skewness, and margin factor, are obtained via multiscale time-frequency analysis with the wavelet packet transform (WPT). Then, generalized discriminant analysis (GDA) is employed to realize the fusion of the wavelet packet feature quantities because it can handle the nonlinearity of the data and eliminate redundant information. Furthermore, the generalized multiple kernel learning support vector machine (GMKL-SVM), which has the advantages of a strong generalization ability and high accuracy, is developed to identify the incipient fault classes of analog circuits. Moreover, a recent swarm intelligence optimization algorithm, the sine cosine algorithm (SCA), is adopted to optimize key parameters of the GMKL-SVM because of its high convergence speed and strong global optimization ability. The method is fully evaluated with the Sallen-Key bandpass filter circuit, the four-op-amp biquad high-pass filter circuit, and the leapfrog filter circuit. The experimental results demonstrate that the proposed incipient fault diagnosis method for analog circuits can produce higher diagnosis accuracy than other typical incipient fault diagnosis methods.
- Published
- 2021
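The four wavelet packet feature quantities listed in the abstract above (energy, fluctuation coefficient, skewness, and margin factor) can be computed per sub-band with elementary statistics. The sketch below uses common textbook definitions; the paper's exact formulas may differ, so treat these as assumptions.

```python
import math

def band_features(x):
    """Common definitions of the four feature quantities for one sub-band
    signal x (the paper's exact formulas may differ slightly)."""
    n = len(x)
    mean = sum(v for v in x) / n
    energy = sum(v * v for v in x)                       # sub-band energy
    fluctuation = sum(abs(v - mean) for v in x) / n      # mean absolute deviation
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    skewness = sum((v - mean) ** 3 for v in x) / (n * std ** 3) if std > 0 else 0.0
    root_ampl = (sum(math.sqrt(abs(v)) for v in x) / n) ** 2
    margin = max(abs(v) for v in x) / root_ampl if root_ampl > 0 else 0.0
    return energy, fluctuation, skewness, margin
```

In the paper these quantities would be computed on each WPT sub-band and then fused by GDA; here they are shown on a raw sample sequence.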
6. DRUNet: A Method for Infrared Point Target Detection
- Author
- Changan Wei, Qiqi Li, Ji Xu, Jingli Yang, and Shouda Jiang
- Subjects
- Fluid Flow and Transfer Processes, Process Chemistry and Technology, General Engineering, General Materials Science, convolutional neural network, IR small target detection, object detection by keypoint estimation, Instrumentation, Computer Science Applications
- Abstract
Deep learning is widely used in vision tasks, but feature extraction for IR small targets is difficult due to their inconspicuous contours and lack of color information. This paper proposes a new convolutional neural network-based (CNN-based) method for IR small target detection called DRUNet. The algorithm is divided into two parts: the feature extraction network and the prediction head. Because small IR targets are difficult to label accurately, Gaussian soft labels are added to supervise the training process and make the network converge faster. Network accuracy is evaluated with a simplified object keypoint similarity, defined by the ratio of the keypoint distance to the radius of the circle inscribed in the target box, and model inference speed is measured fairly after GPU preheating. The experimental results show that the proposed algorithm performs better than commonly used algorithms in the field of small target detection. The model size is 10.5 M, and the test speed reaches 133 FPS on an RTX3090 experimental platform.
- Published
- 2022
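The "simplified object keypoint similarity" in the abstract above normalizes the prediction error by the radius of the circle inscribed in the target box. The exact scoring function is not given in this listing, so the Gaussian decay below is an assumption; only the distance-to-inscribed-radius ratio comes from the abstract.

```python
import math

def simplified_oks(pred_xy, gt_xy, box_w, box_h):
    """Simplified object keypoint similarity: the predicted/ground-truth
    center distance is normalized by the radius of the circle inscribed in
    the target box.  The exponential scoring shape is an assumption, not
    the paper's stated formula."""
    d = math.hypot(pred_xy[0] - gt_xy[0], pred_xy[1] - gt_xy[1])
    r = 0.5 * min(box_w, box_h)        # inscribed-circle radius
    return math.exp(-(d / r) ** 2)     # 1.0 for a perfect hit, decays with distance
```

A perfect prediction scores 1.0, and a miss of one inscribed radius scores e^-1, so the measure stays scale-invariant across target box sizes.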
7. Machine Learning based Combinatorial Test Cases Ordering Approach
- Author
- Yunlong Sheng, Shouda Jiang, Chang-An Wei, and Yijiao Sun
- Subjects
- Prioritization, Computer science, Code coverage, Test method, Machine learning, Test (assessment), Support vector machine, Test case, Combinatorial testing, Artificial intelligence, Detection rate
- Abstract
Combinatorial testing is an efficient test method that can achieve high test coverage with as few test cases as possible. However, combinatorial testing in industrial practice still produces a large number of test cases, and executing all of them costs considerable time and money. Selecting a subset of test cases that can still guarantee the failure detection rate is therefore a common problem. In this paper, we introduce a novel technique for test case prioritization in combinatorial testing based on supervised machine learning. Our approach first collects the test results of a small t-way covering array, from which the machine learning algorithm SVM learns. The SVM is then used to predict the outcomes of a large t-way covering array, and the test cases in the large array are ordered according to the predicted results: test cases that are likely to trigger failures in the system under test are ordered first and given priority. A subset selected from the start of the ordered covering array can then replace the whole covering array, with a reasonable saving in time and cost. Our approach is evaluated by comparing covering arrays ordered by the SVM against random ordering. Our results imply that our technique improves the failure detection rate significantly.
- Published
- 2021
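The core prioritization step in the abstract above reduces to sorting the large covering array by predicted failure probability and executing only a prefix. A minimal sketch, with hypothetical probabilities standing in for the SVM's predictions:

```python
def prioritize(covering_array, predicted_fail_prob, budget):
    """Order covering-array test cases by predicted failure probability
    (highest first) and keep only a budgeted prefix.  In the paper the
    probabilities come from an SVM trained on a small covering array's
    results; here they are hypothetical inputs."""
    ranked = sorted(range(len(covering_array)),
                    key=lambda i: predicted_fail_prob[i], reverse=True)
    return [covering_array[i] for i in ranked[:budget]]

# Four pairwise test cases with hypothetical predicted failure probabilities.
tests = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2")]
probs = [0.1, 0.9, 0.4, 0.7]
subset = prioritize(tests, probs, budget=2)
```

Running the budgeted prefix first concentrates the likely-failing cases at the start of the execution order, which is what raises the early failure detection rate.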
8. A Dual-input Fault Diagnosis Model Based on Convolutional Neural Networks and Gated Recurrent Unit Networks for Analog Circuits
- Author
- Shouda Jiang, Tianyu Gao, Cheng Yang, and Jingli Yang
- Subjects
- Analogue electronics, Frequency domain, Feature extraction, Time domain, Double fault, Fault (power engineering), Algorithm, Convolutional neural network, Domain (software engineering)
- Abstract
To improve the reliability and safety of complex electrical systems, an end-to-end fault diagnosis method for analog circuits is proposed in this paper. First, by combining convolutional neural networks (CNN) and gated recurrent unit (GRU) networks, a feature extraction model based on CNN-GRU is developed to obtain information that characterizes the essential states of the circuit under test (CUT) from its signals. Compared with traditional feature extraction methods, the CNN-GRU model can obtain the spatial features of signals while retaining the time sequence features. Then, a dual-input structure covering the time domain and frequency domain is designed for the CNN-GRU model, and the time-frequency domain fusion features of the signals are obtained by the dual-input fault diagnosis model based on CNN-GRU, thereby fully reflecting the circuit states. The Sallen-Key bandpass filter circuit in the ISCAS'97 circuit set is adopted to comprehensively evaluate the proposed method. Experimental results prove that the proposed fault diagnosis method can accurately identify incipient single fault classes and double fault classes.
- Published
- 2021
9. A Novel Fault Diagnosis Method for Planetary Gearboxes under Imbalanced Data
- Author
- Shouda Jiang, Jingli Yang, and Tianyu Gao
- Subjects
- Discriminator, Computer science, Feature extraction, Function (mathematics), Fault (power engineering), Imbalanced data, Nash equilibrium, Data mining, Energy (signal processing), Generator (mathematics)
- Abstract
To address the issue of fault diagnosis of planetary gearboxes under imbalanced data, a novel fault diagnosis method based on the improved energy-based generative adversarial network (IEBGAN) is proposed. Firstly, convolutional layers are added to the energy-based generative adversarial network (EBGAN) discriminator, thereby improving the feature extraction ability. Then, the classification loss is introduced into the loss function of EBGAN with the purpose of expanding the classification function of the discriminator. Finally, a planetary gearbox fault diagnosis model with sample generation capability is established to achieve the Nash equilibrium by the confrontation between the generator and the discriminator. Experimental results illustrate that the proposed method can improve the accuracy of fault diagnosis for planetary gearboxes even under imbalanced data.
- Published
- 2020
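The loss construction in the abstract above (an EBGAN margin loss plus a classification term) can be written down compactly. The sketch below assumes scalar energies and an assumed weighting `lam` for the classification term; the paper's exact formulation may differ.

```python
def discriminator_loss(energy_real, energy_fake, cls_loss, margin=1.0, lam=1.0):
    """Energy-based GAN discriminator loss with a classification term:
    push real-sample energy down, keep fake-sample energy above a margin,
    and add lam * classification loss (lam is an assumed weighting)."""
    hinge = max(0.0, margin - energy_fake)
    return energy_real + hinge + lam * cls_loss

def generator_loss(energy_fake):
    """The generator tries to lower the energy the discriminator assigns
    to its samples."""
    return energy_fake
```

The hinge term vanishes once fake samples carry more than `margin` energy, so the confrontation between the two losses is what drives the model toward the Nash equilibrium mentioned in the abstract.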
10. An infrared-small-target detection method in compressed sensing domain based on local segment contrast measure
- Author
- Jingli Yang, Lianlei Lin, Zheng Cui, Shouda Jiang, Yanfeng Gu, and Jun-Bao Li
- Subjects
- Computer science, Filter (signal processing), Condensed Matter Physics, Measure (mathematics), Atomic and Molecular Physics, and Optics, Electronic, Optical and Magnetic Materials, Image (mathematics), Compressed sensing, Key (cryptography), Preprocessor, Contrast (vision), Computer vision, Artificial intelligence, Block size
- Abstract
Real-time performance is one of the key properties of infrared target detection systems and limits the applications of many algorithms. In this paper, a novel infrared-small-target detection method using compressed sensing technology is proposed to improve real-time performance by combining image compression with target detection. Furthermore, when dealing with images containing both bright and dark targets, it is common practice during preprocessing to filter the image separately for each kind of target. In this paper, a local segment contrast measure method is proposed to preprocess such images uniformly. Finally, the influence of certain vital parameters (e.g., the block size and the filter window size) on the detection and compression performance is discussed at length, and several guiding principles for selecting those parameters are developed. The experimental results demonstrate that the proposed framework with an appropriate block size and filter window size provides a good balance between real-time performance and accuracy. The proposed local segment contrast measure method is efficient when applied to both bright and dark small targets.
- Published
- 2018
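As a rough illustration of local-contrast-based preprocessing, the sketch below computes a generic contrast map in which a small bright target stands out from its neighborhood. This is a basic center-versus-neighbors measure, not the paper's "local segment contrast measure", whose handling of bright and dark targets is more elaborate.

```python
def local_contrast(img, eps=1e-6):
    """A basic local contrast map: each interior pixel's squared intensity
    divided by the mean of its 8 neighbors.  A small bright target on a
    flat background produces a sharp peak in the output."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            m = max(sum(neigh) / len(neigh), eps)
            out[y][x] = img[y][x] * img[y][x] / m
    return out

# A 7x7 flat background with one bright target pixel in the middle.
frame = [[1.0] * 7 for _ in range(7)]
frame[2][2] = 10.0
cmap = local_contrast(frame)
```

Thresholding such a map gives candidate target locations; the paper additionally applies the measure in the compressed sensing domain to keep the pipeline real-time.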
11. A Reduced Gaussian Kernel Least-Mean-Square Algorithm for Nonlinear Adaptive Signal Processing
- Author
- Chao Sun, Yuqi Liu, and Shouda Jiang
- Subjects
- Computational complexity theory, Computer science, Applied Mathematics, Hilbert space, Adaptive filter, Nonlinear system, Kernel (statistics), Signal Processing, Taylor series, Gaussian function, Weighted network, Algorithm
- Abstract
The purpose of kernel adaptive filtering (KAF) is to map input samples into reproducing kernel Hilbert spaces and use the stochastic gradient approximation to address learning problems. However, the growth of the weighted networks for KAF based on existing kernel functions leads to high computational complexity. This paper introduces a reduced Gaussian kernel that is a finite-order Taylor expansion of a decomposed Gaussian kernel. The corresponding reduced Gaussian kernel least-mean-square (RGKLMS) algorithm is derived. The proposed algorithm avoids the sustained growth of the weighted network in a nonstationary environment via an implicit feature map. To verify the performance of the proposed algorithm, extensive simulations are conducted based on scenarios involving time-series prediction and nonlinear channel equalization, thereby proving that the RGKLMS algorithm is a universal approximator under suitable conditions. The simulation results also demonstrate that the RGKLMS algorithm can exhibit a comparable steady-state mean-square-error performance with a much lower computational complexity compared with other algorithms.
- Published
- 2018
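For context, the baseline kernel least-mean-square (KLMS) algorithm that RGKLMS modifies can be sketched in a few lines. The version below keeps the full Gaussian kernel and an ever-growing center list, which is exactly the network-growth problem the paper's reduced (Taylor-expanded) kernel is designed to avoid.

```python
import math

def gauss(x, y, sigma=1.0):
    """Gaussian kernel between two input vectors."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

class KLMS:
    """Plain Gaussian-kernel LMS.  Each sample becomes a new center, so the
    weighted network grows without bound; RGKLMS replaces the kernel with a
    finite-order Taylor expansion to cap this growth via an implicit map."""
    def __init__(self, step=0.2, sigma=1.0):
        self.step, self.sigma = step, sigma
        self.centers, self.coeffs = [], []

    def predict(self, x):
        return sum(a * gauss(x, c, self.sigma)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)          # prediction error
        self.centers.append(list(x))     # store the new center
        self.coeffs.append(self.step * e)
        return e

# Learn a small nonlinear mapping online by cycling over three samples.
model = KLMS(step=0.2)
data = [([0.0], 0.0), ([1.0], math.sin(1.0)), ([2.0], math.sin(2.0))]
for _ in range(200):
    for x, d in data:
        model.update(x, d)
```

After a few hundred updates the training error is near zero, but `model.centers` holds 600 entries, which illustrates why a bounded-growth variant matters in nonstationary settings.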
12. Robust spatio-temporal context for infrared target tracking
- Author
- Yanfeng Gu, Jun-Bao Li, Jingli Yang, Zheng Cui, and Shouda Jiang
- Subjects
- Warning system, Computer science, Temporal context, Context (language use), Condensed Matter Physics, Object (computer science), Tracking (particle physics), Atomic and Molecular Physics, and Optics, Field (computer science), Motion (physics), Electronic, Optical and Magnetic Materials, Eye tracking, Computer vision, Artificial intelligence
- Abstract
Target tracking is one of the most important and active research areas in the field of computer vision. In this paper, we address the problem of tracking an object that is completely occluded in a video. The proposed robust spatio-temporal context (RSTC) method, inspired by the spatio-temporal context method, uses spatio-temporal context information to establish an early warning mechanism. The core technology of the early warning mechanism is to monitor the fluctuation of the relative change rate between two adjacent frames. When a target is completely occluded, the algorithm estimates the target’s location using accurate motion information saved during the early warning. Finally, the algorithm captures the target after the target reappears. Experimental results show that the proposed algorithm is very robust and efficient when used in visual tracking applications.
- Published
- 2018
13. Kernel Filtered-x LMS Algorithm for Active Noise Control System with Nonlinear Primary Path
- Author
- Yuqi Liu, Chao Sun, and Shouda Jiang
- Subjects
- Signal processing, Computer science, Applied Mathematics, White noise, Adaptive filter, Least mean squares filter, Nonlinear system, Kernel method, Reference noise, Algorithm, Active noise control
- Abstract
In active noise control (ANC) systems, the primary path may exhibit nonlinear impulse responses. Conventional linear ANC controllers based on a filtered-x least mean square (FxLMS) algorithm exhibit performance degradation when compensating for nonlinear distortions of the primary path. Several nonlinear active noise control algorithms, including Volterra filtered-x least mean square (VFxLMS) and filtered-s least mean square (FsLMS), have been utilized to overcome this nonlinear effect. However, the performance still needs to be improved when the reference noise is mixed with multiple narrowband signals and additional Gaussian white noise. Over the last several years, kernel adaptive filters have exhibited powerful capabilities in multiple signal processing domains. When kernel adaptive filters are introduced into the ANC system, a great challenge is to compensate for the inherent delay caused by the secondary path. Due to the implicit mapping of the kernel method, it is difficult to filter the reference signal in the high-dimensional feature space. In this paper, an approximate method is proposed in which the filtered reference signal is mapped to the high-dimensional feature space. In addition, a kernel filtered-x least mean square (KFxLMS) algorithm is developed for an ANC system with a nonlinear primary path. Simulation experiments demonstrate that the performance of the proposed KFxLMS algorithm is better than that of the FxLMS, VFxLMS, and FsLMS algorithms.
- Published
- 2018
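For context, the baseline FxLMS algorithm that the paper extends can be sketched for a single channel. The sketch below assumes a known 2-tap secondary path and a linear primary path, so it shows only the standard filtered-reference update, not the paper's kernel-based handling of a nonlinear primary path.

```python
import math

sec = [1.0, 0.5]                  # assumed (and perfectly estimated) secondary path
prim = [0.8, 0.0, 0.3]            # linear primary path the controller must cancel
L, mu = 8, 0.01                   # controller length and step size
w = [0.0] * L
xbuf = [0.0] * (L + len(sec))     # reference signal history
ybuf = [0.0] * len(sec)           # anti-noise history (fed through the secondary path)
fxbuf = [0.0] * L                 # filtered-reference history
errs = []
for n in range(4000):
    x = math.sin(0.1 * n) + 0.5 * math.sin(0.23 * n)        # tonal reference noise
    xbuf = [x] + xbuf[:-1]
    d = sum(p * xbuf[k] for k, p in enumerate(prim))        # noise at the error mic
    y = sum(w[k] * xbuf[k] for k in range(L))               # anti-noise output
    ybuf = [y] + ybuf[:-1]
    e = d - sum(s * ybuf[k] for k, s in enumerate(sec))     # residual error
    fx = sum(s * xbuf[k] for k, s in enumerate(sec))        # reference filtered by sec. path
    fxbuf = [fx] + fxbuf[:-1]
    for k in range(L):                                      # FxLMS weight update
        w[k] += mu * e * fxbuf[k]
    errs.append(e)
```

Filtering the reference through the secondary path model before the LMS update is what compensates for the path's delay; KFxLMS must additionally approximate this filtering in the kernel's high-dimensional feature space.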
14. Active Noise Control Over Space: A Wave Domain Approach
- Author
- Shouda Jiang, Wen Zhang, Thushara D. Abhayapala, Prasanga N. Samarasinghe, and Jihui Zhang
- Subjects
- Acoustics and Ultrasonics, Computer science, Noise reduction, Signal, Secondary source, Reduction (complexity), Computational Mathematics, Control theory, Computer Science (miscellaneous), Noise control, Loudspeaker, Electrical and Electronic Engineering, Active noise control
- Abstract
Noise control and cancellation over a spatial region is a fundamental problem in acoustic signal processing. In this paper, we utilize wave-domain adaptive algorithms to iteratively calculate the secondary source driving signals and to cancel the primary noise field over the control region. We propose wave-domain active noise control algorithms based on two minimization problems: first, minimizing the wave-domain residual signal coefficients, and second, minimizing the acoustic potential energy over the region, and derive the update equations with respect to two variables, the loudspeaker weights and wave-domain secondary source coefficients. Simulation results demonstrate the effectiveness of the proposed algorithms, more specifically the convergence speed and the noise cancellation performance in terms of the noise reduction level and acoustic potential energy reduction level over the entire spatial region.
- Published
- 2018
15. A dual-kernel spectral-spatial classification approach for hyperspectral images based on Mahalanobis distance metric learning
- Author
- Lianlei Lin, Jun-Bao Li, Shouda Jiang, Li Li, Chao Sun, and Jingwei Yin
- Subjects
- Information Systems and Management, Computer science, Posterior probability, Theoretical Computer Science, Kernel (linear algebra), Artificial Intelligence, Mahalanobis distance, Hyperspectral imaging, Pattern recognition, Spectral bands, Computer Science Applications, Data set, Support vector machine, Statistical classification, Kernel (image processing), Control and Systems Engineering, Computer Vision and Pattern Recognition, Classifier (UML), Software
- Abstract
Hyperspectral images provide a precise representation of the earth’s surface, with abundant spectral and spatial features, but normal classification algorithms use only the information provided by the spectral features of each data point. In this paper, we propose a new approach to hyperspectral image classification based on Mahalanobis distance metric learning and kernel learning that considers both the features of the spectral bands and a spatial prior. This approach consists of two components. First, we obtain a primary labeled classification result and a posterior probability distribution for each pixel point using a Mahalanobis-kernel-based classifier. Second, instead of the original or extracted spectral features, we reconstruct the spatial relationship of the hyperspectral images using the posterior probability of every data point, smooth the boundaries, and revise suspicious points based on this piecewise information using a kernel-based multi-region segmentation method. In an experimental study, we adopt a support vector machine (SVM) classifier as the kernel classifier to obtain the posterior probabilities using dimensionally reduced data. The proposed method is compared with several other methods from various perspectives. Simulation experiments run on several real hyperspectral data sets are reported. The results show that the proposed method performs better than other comparable classification algorithms, especially in a condition-constrained environment.
- Published
- 2018
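A Mahalanobis kernel of the kind used in the abstract above can be written as a Gaussian over a learned quadratic distance. In the sketch below the positive semi-definite matrix `M` is simply supplied rather than learned by metric learning, and the exponential form is an assumption about the kernel's shape:

```python
import math

def mahalanobis_kernel(x, y, M, gamma=1.0):
    """Gaussian-shaped kernel over a Mahalanobis distance,
    k(x, y) = exp(-gamma * (x - y)^T M (x - y)).  M is a positive
    semi-definite matrix that metric learning would produce; here it is
    just supplied as an input."""
    d = [a - b for a, b in zip(x, y)]
    quad = sum(d[i] * M[i][j] * d[j]
               for i in range(len(d)) for j in range(len(d)))
    return math.exp(-gamma * quad)

# With M = identity this reduces to the ordinary Gaussian RBF kernel.
I2 = [[1.0, 0.0], [0.0, 1.0]]
k_same = mahalanobis_kernel([0.3, -0.7], [0.3, -0.7], I2)
```

Feeding this kernel to an SVM (as the paper does) lets the learned metric stretch or shrink individual spectral bands before the similarity is computed.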
16. Data validation of multifunctional sensors using independent and related variables
- Author
- Shouda Jiang, Lianlei Lin, Zhen Sun, Jingli Yang, and Yinsheng Chen
- Subjects
- Computer science, Data validation, Iterative reconstruction, Kernel principal component analysis, Fault detection and isolation, Data recovery, Electrical and Electronic Engineering, Instrumentation, Reliability (statistics), Metals and Alloys, Condensed Matter Physics, Surfaces, Coatings and Films, Electronic, Optical and Magnetic Materials, Data mining, Maximal information coefficient
- Abstract
To enhance the reliability of multifunctional sensors, a novel data validation strategy is presented that handles independent and related variables separately. The maximal information coefficient (MIC), which can measure the strength of the correlation between two variables, is applied to divide all variables of multifunctional sensors into related and independent ones. For independent variables, the k-nearest neighbor (kNN) rule is introduced to accomplish fault detection and isolation, and the grey predictive model GM(1,1), which has the advantages of a low computation burden and high accuracy, is adopted to recover the data of faulty independent variables. For related variables, kernel principal component analysis (KPCA), which can handle possible non-linearity in the data, is employed to realize fault detection. An iterative reconstruction-based contribution (IRBC) method is developed to isolate all faulty related variables, and their data are recovered using a fuzzy similarity (FS)-based reconstruction method that exploits the spatial correlations among related variables. An experimental system for multifunctional sensors is built to evaluate the proposed strategy, and performance comparisons with its counterparts are also conducted.
- Published
- 2017
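The grey predictive model GM(1,1) mentioned in the abstract above is small enough to sketch in full: accumulate the series, fit the whitened equation by least squares, and difference the fitted accumulation to forecast the next sample. The implementation below follows the common textbook formulation:

```python
import math

def gm11_predict(x0):
    """Grey model GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series
    x1 and return the predicted next value of the original series x0."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # Least squares for [a, b] in y_k = -a*z_k + b (2x2 normal equations).
    m = n - 1
    szz = sum(v * v for v in z)
    sz, sy = sum(z), sum(y)
    szy = sum(zk * yk for zk, yk in zip(z, y))
    det = szz * m - sz * sz
    a = -(m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det
    # Time-response function of x1, differenced to recover x0.
    x1_next = (x0[0] - b / a) * math.exp(-a * n) + b / a
    x1_curr = (x0[0] - b / a) * math.exp(-a * (n - 1)) + b / a
    return x1_next - x1_curr
```

On a near-exponential series the fit is tight, which is why GM(1,1) is a cheap choice for recovering short runs of faulty independent-variable readings.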
17. A novel index structure to efficiently match events in large-scale publish/subscribe systems
- Author
- Shouda Jiang, Fan Jing, Jingli Yang, and Chengyu Li
- Subjects
- Matching (statistics), Theoretical computer science, Computer Networks and Communications, Computer science, Event (computing), Process (computing), Component (UML), Key (cryptography), Data mining, Blossom algorithm
- Abstract
A novel index structure based on pairwise attribute subspaces is presented. An efficient event matching algorithm named PADEM (Pairwise Attribute Division based Event Matching) is developed based on this index structure. The performance metrics of PADEM (e.g., matching time, insertion time, deletion time, and memory consumption) are theoretically analysed and compared with those of its counterparts. The event matching algorithm, which checks events against all subscriptions, is a fundamental component of large-scale content-based publish/subscribe systems, and it is the key to improving the efficiency of the entire system. To meet the increasing efficiency requirements of real-time publish/subscribe systems, an event matching algorithm named PADEM is presented in this paper. By dividing the attribute space into multiple pairwise attribute subspaces, PADEM constructs a novel index structure to classify all subscriptions in the system. This index structure guarantees that the matching process in each of its units can be triggered only by corresponding events. The experimental results demonstrate that PADEM can dramatically improve the efficiency of event matching, particularly in large-scale distributed systems with high volumes of subscriptions.
- Published
- 2017
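The idea of bucketing subscriptions by attribute pairs so that an event only probes relevant buckets can be illustrated with a toy equality-predicate matcher. PADEM's actual index and predicate model are considerably richer; everything below is a simplified sketch.

```python
from collections import defaultdict
from itertools import combinations

class PairwiseIndex:
    """Toy pairwise-attribute-subspace index: each subscription (a dict of
    attribute -> required value) lands in one bucket keyed by one of its
    attribute pairs, so an event probes only the buckets for attribute
    combinations it actually carries.  This is a sketch of the bucketing
    idea only, not PADEM itself."""
    def __init__(self):
        self.buckets = defaultdict(list)

    def insert(self, sub_id, predicates):
        attrs = sorted(predicates)
        key = (attrs[0], attrs[1]) if len(attrs) >= 2 else (attrs[0],)
        self.buckets[key].append((sub_id, predicates))

    def match(self, event):
        hits = []
        keys = [(a,) for a in sorted(event)] + list(combinations(sorted(event), 2))
        for key in keys:                       # probe only relevant buckets
            for sub_id, preds in self.buckets.get(key, []):
                if all(event.get(a) == v for a, v in preds.items()):
                    hits.append(sub_id)
        return sorted(hits)

idx = PairwiseIndex()
idx.insert("s1", {"type": "t", "price": 5})
idx.insert("s2", {"type": "x"})
```

Subscriptions whose key attributes do not appear in the event are never even visited, which is the mechanism behind the matching-time savings the abstract reports.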
18. Multichannel active noise control for spatially sparse noise fields
- Author
- Shouda Jiang, Prasanga N. Samarasinghe, Jihui Zhang, Wen Zhang, and Thushara D. Abhayapala
- Subjects
- Acoustics and Ultrasonics, Noise measurement, Noise (signal processing), Computer science, Acoustics, Attenuation, Noise floor, Gradient noise, Arts and Humanities (miscellaneous), Colors of noise, Gaussian noise, Image noise, Value noise, Algorithm, Noise (radio), Active noise control
- Abstract
Multichannel active noise control (ANC) is currently an attractive solution for the attenuation of low-frequency noise fields in three-dimensional space. This paper develops a controller for the case when the noise source components are sparsely distributed in space. The anti-noise signals are designed, as in conventional ANC, to minimize the residual errors, but with an additional l1-norm regularization term applied to the signal magnitude. As a result, only secondary sources close to the noise sources are required to be active for the cancellation of sparse noise fields. Adaptive algorithms with low computational complexity and faster convergence speeds are proposed.
- Published
- 2016
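The l1-regularized design in the abstract above drives the weights of secondary sources far from the noise sources exactly to zero. A minimal sketch of that sparsifying effect, using ISTA (proximal gradient) on a toy problem with an identity observation operator rather than the paper's adaptive ANC formulation:

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def ista(b, lam, step=0.25, iters=100):
    """Minimize ||x - b||^2 + lam * ||x||_1 by proximal gradient (ISTA).
    A toy stand-in for the l1-regularized controller: with an identity
    observation operator, the sparsifying effect is easy to see."""
    x = [0.0] * len(b)
    for _ in range(iters):
        x = [soft(xi - step * 2.0 * (xi - bi), step * lam)
             for xi, bi in zip(x, b)]
    return x

# One strong and one weak "secondary source": the weak one is zeroed out.
weights = ista([1.0, 0.05], lam=0.2)
```

The thresholding produces exact zeros rather than merely small values, which is why only the loudspeakers near the sparse noise sources stay active.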
19. A dual-layer supervised Mahalanobis kernel for the classification of hyperspectral images
- Author
- Lianlei Lin, Shouda Jiang, Jun-Bao Li, Li Li, and Chao Sun
- Subjects
- Mahalanobis distance, Cognitive Neuroscience, Supervised learning, Hyperspectral imaging, Dual layer, Pattern recognition, Machine learning, Computer Science Applications, Support vector machine, Matrix (mathematics), Artificial Intelligence, Hyperspectral image classification, Classifier (UML), Mathematics
- Abstract
To address the drawback of traditional Mahalanobis distance metric learning (DML) methods that learn the matrix without considering the weights of each class, in this paper, a novel dual-layer supervised Mahalanobis kernel is proposed for the classification of hyperspectral images. By modifying the traditional unsupervised Mahalanobis kernel, a supervised Mahalanobis matrix that can include more relativity information of different types of real materials in hyperspectral images is learned to obtain a new kernel. The proposed Mahalanobis matrix is obtained in two steps. In step one, we learn the first traditional Mahalanobis matrix with all samples to map the raw data. In step two, based on the data mapped by the first matrix, we pick several hard-to-identify classes from all the classes and learn the second Mahalanobis matrix using only these data. Finally, by combining these two matrices, we construct a new form of the Mahalanobis kernel. Simulation experiments are conducted on three real hyperspectral data sets. We use SVM as the kernel-based classifier to classify the dimensionally reduced data and compare with several methods from various aspects. The results show that the proposed methods perform better than other unsupervised or single-layer DML methods in classifying the hard-to-identify classes, especially under an extreme condition.
- Published
- 2016
20. Parallel search strategy in kernel feature space to track FLIR target
- Author
- Jun-Bao Li, Shouda Jiang, Ping Fu, Chang'an Wei, and Zhen Shi
- Subjects
- Computer science, Cognitive Neuroscience, Feature vector, Probability density function, Sparse approximation, Computer Science Applications, Kernel (linear algebra), Kernel method, Artificial Intelligence, Kernel (statistics), Histogram, Computer vision, Artificial intelligence
- Abstract
Robust tracking in forward looking infrared (FLIR) sequences is still a challenging problem in the field of computer vision, because images acquired by infrared sensors are characterized by low signal-to-clutter ratios (SCR) and targets of interest may exhibit profound appearance variations due to ego-motion of the sensor platform and complex maneuvers. Though many efforts have been made, some issues remain to be addressed. First, intensity features are not enough to deal with complex appearance variations in challenging sequences. Second, to obtain a satisfying estimate of the target state, a large number of particles has to be employed to approximate its probability density function (pdf). To deal with these two problems, a parallel search strategy based on kernel sparse representation (the KL1PS tracker) is proposed to perform the tracking task in FLIR sequences. With its ability to capture nonlinear features, the kernel method is introduced to deal with complex appearance variations. After the kernel function is constructed based on histogram features, both the target templates and the candidates are mapped into the kernel feature space. Then efficient state particles are selected based on sparse representation, which can be used to estimate the target state. The proposed method is tested on the AMCOM database, and the experimental results demonstrate its excellent tracking accuracy compared with some state-of-the-art trackers.
- Published
- 2016
21. Particle Swarm Optimization Based Parallel Input Time Test Suite Construction
- Author
-
Chang-An Wei, Yunlong Sheng, and Shouda Jiang
- Subjects
Cover (telecommunications) ,Computer science ,Factor (programming language) ,0202 electrical engineering, electronic engineering, information engineering ,Test suite ,Particle swarm optimization ,020206 networking & telecommunications ,020201 artificial intelligence & image processing ,02 engineering and technology ,Time point ,Algorithm ,computer ,computer.programming_language - Abstract
Testing of real-time embedded systems (RTESs) under input timing constraints is a critical issue, especially for parallel input factors. Test suites that cover more possibilities of input time and discover more defects under input timing constraints are worthy of study. In this paper, parallel input time test suites (PITTSs) are proposed to improve the coverage of input time combinations. PITTSs cover not only all neighboring input time point combinations of each factor but also all input time point combinations between every two factors of the same input. A particle swarm optimization-based PITTS construction algorithm is presented, and benchmarks with different configurations are conducted to evaluate the algorithm's performance. As an application, a real-world RTES is tested with a PITTS, and the results suggest that PITTSs are effective and efficient for testing RTESs under the input timing constraints of parallel input factors.
- Published
- 2019
22. An infrared small target detection framework based on local contrast method
- Author
-
Jun-Bao Li, Zheng Cui, Shouda Jiang, and Jingli Yang
- Subjects
Infrared ,Computer science ,business.industry ,Applied Mathematics ,media_common.quotation_subject ,020206 networking & telecommunications ,02 engineering and technology ,Condensed Matter Physics ,Constant false alarm rate ,Image (mathematics) ,Modulation ,Human visual system model ,0202 electrical engineering, electronic engineering, information engineering ,Contrast (vision) ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Forward looking infrared ,business ,Instrumentation ,media_common ,Block (data storage) - Abstract
Small-target detection in the infrared imagery used in forward-looking infrared systems is an important task in remote sensing. It is important to improve detection capabilities such as detection rate, false alarm rate, and speed. In this letter, a novel approach inspired by the human visual system (HVS) is presented. First, block compressed sampling theory is used to compress the image and obtain a modulation map. Then, small abnormal regions in the modulation map are detected by a high-speed local contrast method and defined as candidate targets. Experimental results show that the proposed algorithm achieves good performance in detection rate, false alarm rate, and speed simultaneously.
- Published
- 2016
23. A Mahalanobis metric learning-based polynomial kernel for classification of hyperspectral images
- Author
-
Li Li, Jun-Bao Li, Chao Sun, Shouda Jiang, and Lianlei Lin
- Subjects
Computer Science::Machine Learning ,Mahalanobis distance ,business.industry ,0211 other engineering and technologies ,Pattern recognition ,02 engineering and technology ,Kernel principal component analysis ,Statistics::Machine Learning ,ComputingMethodologies_PATTERNRECOGNITION ,Kernel (image processing) ,Artificial Intelligence ,Polynomial kernel ,Kernel embedding of distributions ,Variable kernel density estimation ,Computer Science::Computer Vision and Pattern Recognition ,Radial basis function kernel ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Tree kernel ,business ,Software ,021101 geological & geomatics engineering ,Mathematics - Abstract
In this paper, to combine the advantages of both the polynomial kernel and Mahalanobis distance metric learning (DML) methods, we propose a Mahalanobis DML-based polynomial kernel for the classification of hyperspectral images. To keep the method computationally efficient, we adopt a fast iterative method to learn the Mahalanobis matrix. Simulation experiments are conducted on two real hyperspectral data sets. To evaluate the proposed method, we compare it with the traditional radial basis function (RBF) kernel, the polynomial kernel, and the RBF-based Mahalanobis kernel; the results show that the proposed method improves the capability of the polynomial kernel and also outperforms the RBF-based Mahalanobis kernel.
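As a sketch of what a Mahalanobis-metric polynomial kernel can look like (this is not the paper's construction: the fast iterative DML solver is not reproduced, the inverse covariance of the data stands in for the learned Mahalanobis matrix `M`, and the degree `d` is hypothetical):

```python
import numpy as np

# Hedged sketch of a Mahalanobis-metric polynomial kernel. M here is
# simply the inverse covariance of the training data, used as a
# stand-in for a learned Mahalanobis matrix; d is an assumed degree.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))                 # 20 samples, 5 spectral bands

M = np.linalg.inv(np.cov(X, rowvar=False))   # stand-in Mahalanobis matrix
d = 2                                        # polynomial degree (assumed)

def mahalanobis_poly_kernel(A, B, M, d):
    """k(x, z) = (x^T M z + 1)^d computed for all pairs of rows."""
    return (A @ M @ B.T + 1.0) ** d

K = mahalanobis_poly_kernel(X, X, M, d)
print(K.shape)               # (20, 20)
print(np.allclose(K, K.T))   # a valid kernel matrix is symmetric
```

With a symmetric positive-definite `M`, this reduces to the ordinary polynomial kernel computed in a linearly transformed feature space, which is what the learned metric is meant to shape.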
- Published
- 2016
24. Hierarchical search strategy in particle filter framework to track infrared target
- Author
-
Jun-Bao Li, Chang'an Wei, Zhen Shi, Shouda Jiang, and Ping Fu
- Subjects
business.industry ,Computer science ,020206 networking & telecommunications ,Probability density function ,02 engineering and technology ,Sparse approximation ,Tracking (particle physics) ,Field (computer science) ,Missile ,Artificial Intelligence ,Salient ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,State (computer science) ,business ,Particle filter ,Software - Abstract
A target of interest may exhibit significant appearance variations because of its complex maneuvers, ego-motion of the camera platform, and other factors. Target tracking in forward-looking infrared (FLIR) sequences is therefore still a challenging problem in the field of computer vision. Although many efforts have been devoted to it, some issues remain to be addressed. First, state particles generated from prior information cannot approximate the probability density function well when the target state changes markedly. Second, a large number of particles must be employed to obtain a satisfying estimate of the target state, which in turn causes a heavy computational burden. In this paper, a hierarchical search strategy (the HS tracker) is proposed to track infrared targets in the particle filter framework, and two observation models are employed to locate the target robustly. In the first stage, a saliency map leads the redistributed state particles to cover the salient areas, providing a rough prediction of the target areas. In the second stage, sparse representation is employed to search for a subset of true candidates among all target candidates, so that only efficient state particles are used to estimate the target state. The proposed method is tested on numerous FLIR sequences from the US Army Aviation and Missile Command database, and the experimental results demonstrate its excellent performance.
- Published
- 2016
25. Status Self-Validation of Sensor Arrays Using Gray Forecasting Model and Bootstrap Method
- Author
-
Shouda Jiang, Xu Yonghui, Qi Wang, Yinsheng Chen, Jingli Yang, and Liu Xiaodong
- Subjects
0209 industrial biotechnology ,Engineering ,business.industry ,Signal reconstruction ,020208 electrical & electronic engineering ,Real-time computing ,Data validation ,02 engineering and technology ,Fault detection and isolation ,020901 industrial engineering & automation ,Sensor array ,Experimental system ,0202 electrical engineering, electronic engineering, information engineering ,Electronic engineering ,Redundancy (engineering) ,Measurement uncertainty ,Electrical and Electronic Engineering ,business ,Instrumentation ,Reliability (statistics) - Abstract
The reliability monitoring of sensor arrays is a challenging and critical issue that directly influences the performance of a measurement and control system. In this paper, a novel strategy based on gray forecasting model GM(1,1) coupled with the bootstrap method is proposed for status self-validation of sensor arrays. The proposed strategy focuses on fault detection, isolation, and recovery (FDIR), data validation and dynamic measurement uncertainty estimation of the sensor arrays. The FDIR scheme can effectively detect and isolate sensor abrupt faults and simultaneously accomplish fault recovery with high accuracy and good timeliness. Furthermore, the proposed FDIR scheme has the advantage of discriminating between fault-free signals with sudden changes and undoubted abrupt faults through the trust mechanism. The model GM(1,1) is updated continuously by a metabolism method to improve the adaptivity of the strategy for reliability monitoring. After signal reconstruction, the data validation and dynamic measurement uncertainty can be evaluated by the bootstrap method without any prior information about measurands. A real metal-oxide gas sensor array experimental system is designed to verify the excellent performance of the proposed strategy. The experimental results demonstrate that the proposed approach is capable of conducting the status self-validation of sensor arrays effectively and improving the reliability of sensor arrays in engineering applications.
- Published
- 2016
26. An infrared small target detection algorithm based on high-speed local contrast method
- Author
-
Jun-Bao Li, Zheng Cui, Shouda Jiang, and Jingli Yang
- Subjects
Learning classifier system ,Infrared imagery ,Infrared ,Computer science ,media_common.quotation_subject ,02 engineering and technology ,Small target ,Condensed Matter Physics ,01 natural sciences ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,Constant false alarm rate ,010309 optics ,Task (computing) ,0103 physical sciences ,Human visual system model ,0202 electrical engineering, electronic engineering, information engineering ,Contrast (vision) ,020201 artificial intelligence & image processing ,Algorithm ,media_common - Abstract
Small-target detection in infrared imagery with a complex background is an important task in remote sensing. It is important to improve detection capabilities such as detection rate, false alarm rate, and speed. However, current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, an infrared (IR) small target detection algorithm with two layers, inspired by the human visual system (HVS), is proposed to balance these detection capabilities. The first layer uses a high-speed simplified local contrast method to select significant information, and the second layer uses a machine learning classifier to separate targets from background clutter. Experimental results show that the proposed algorithm achieves good performance in detection rate, false alarm rate, and speed simultaneously.
- Published
- 2016
27. Grey bootstrap method for data validation and dynamic uncertainty estimation of self-validating multifunctional sensors
- Author
-
Shouda Jiang, Qi Wang, Yinsheng Chen, Jingli Yang, and Kai Song
- Subjects
Chemical process ,Computer science ,Process Chemistry and Technology ,Data validation ,computer.software_genre ,Computer Science Applications ,Analytical Chemistry ,Experimental system ,Control system ,Measurement uncertainty ,Probability distribution ,Data mining ,Isolation (database systems) ,computer ,Spectroscopy ,Software ,Reliability (statistics) - Abstract
The accuracy and reliability of multifunctional sensor outputs directly influence the running state and performance of measurement and control systems in chemical processes. Given their importance, self-validating multifunctional sensors are presented to improve the reliability of measurements in operation. A novel strategy based on the grey bootstrap method (GBM) is proposed for the online data validation and dynamic uncertainty estimation of self-validating multifunctional sensors. The data validation algorithm and the working principle based on the GBM are applied to multiple-fault detection, isolation, and recovery (FDIR). The proposed FDIR scheme can simultaneously isolate multiple faults of multifunctional sensors and accomplish failure recovery with high accuracy and good timeliness. Moreover, it performs well in discriminating between fault-free signals with sudden changes and undoubted faults. Because of the unknown probability distribution and small sample size, the traditional expression of uncertainty has limitations in dynamic measurements. As a data-driven method, the GBM can evaluate the measurement uncertainty in real time from poor information, without prior knowledge of the probability distribution of the measurand. The performance of the proposed strategy is verified by computer simulations and a real experimental system for chemical gas concentration monitoring. A comparison of different methods shows that the GBM is superior for the data validation and dynamic uncertainty estimation of self-validating multifunctional sensors.
- Published
- 2015
28. A filtered-x weighted accumulated LMS algorithm: Stochastic analysis and simulations for narrowband active noise control system
- Author
-
Chao Sun, Yang Jingli, Shouda Jiang, and Zhong Bo
- Subjects
Adaptive algorithm ,Mean squared error ,Stochastic process ,Secondary source ,Least mean squares filter ,Narrowband ,Control and Systems Engineering ,Control theory ,Signal Processing ,Convergence (routing) ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Software ,Mathematics ,Active noise control - Abstract
In narrowband active noise control (NANC) systems, several modifications of the well-known filtered-x least-mean-square (FXLMS) algorithm have been proposed for improved operation. The filtered-x weighted accumulated LMS (FXWALMS) algorithm is a variant of the FXLMS algorithm obtained by introducing the momentum LMS (MLMS) algorithm into the conventional FXLMS algorithm. The momentum version of the adaptive algorithm is used in practical implementations to accelerate convergence at moderate computational cost. To better understand the influence of the FXWALMS algorithm on the NANC system, its statistical performance is investigated for both transient and steady-state behavior. Difference equations describing the convergence behavior of the mean and mean-squared estimation errors for the discrete Fourier coefficients (DFCs) of the secondary source are derived and discussed in detail. Using the related difference equations, closed-form steady-state expressions for the DFC estimation mean square error and the residual noise mean square error are developed. Moreover, a modified FXWALMS (MFXWALMS) algorithm is proposed to improve the overall performance of the system. Extensive simulations are conducted to confirm the analytical results and the superior performance of the MFXWALMS algorithm in various scenarios.
- Published
- 2014
29. An Infrared Small Target Detection Method Based on Block Compressed Sensing
- Author
-
Zheng Cui, Jingli Yang, and Shouda Jiang
- Subjects
Infrared ,business.industry ,Computer science ,02 engineering and technology ,Small target ,021001 nanoscience & nanotechnology ,01 natural sciences ,010309 optics ,Compressed sensing ,0103 physical sciences ,Computer vision ,Artificial intelligence ,0210 nano-technology ,business ,Test sample ,Block (data storage) - Abstract
To improve the real-time performance of infrared weapon systems, a method based on block compressed sensing is proposed to detect infrared small targets; it is also easy to implement in hardware. The proposed method detects and locates small infrared targets by classifying the compressed results of image blocks. In addition, to overcome the low detection accuracy caused by uniform blocking, the proposed method uses overlapping blocks to reduce the maximum distance between the center of the test sample block and that of the target. Experiments show that the proposed method can effectively improve the detection accuracy of infrared small targets.
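As a hedged illustration of the block-wise measurement step, the sketch below compresses overlapping image blocks with a random measurement matrix and flags the most energetic compressed block. The simple energy test stands in for the paper's classifier, and the synthetic image, block size, stride, and measurement count are invented for the example:

```python
import numpy as np

# Minimal sketch of block-wise compressed measurements for small-target
# screening (the paper's actual classifier is not reproduced; a simple
# energy test on the compressed vectors stands in for it).
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (64, 64))        # background clutter
img[30:33, 40:43] += 5.0                    # small bright target

B, step = 8, 4                              # block size and overlap stride
m = 16                                      # measurements per 64-pixel block
phi = rng.normal(size=(m, B * B)) / np.sqrt(m)   # random measurement matrix

best, best_pos = -np.inf, None
for r in range(0, img.shape[0] - B + 1, step):
    for c in range(0, img.shape[1] - B + 1, step):
        x = img[r:r + B, c:c + B].ravel()
        y = phi @ x                          # compressed measurements only
        energy = np.sum(y * y)               # roughly preserves block energy
        if energy > best:
            best, best_pos = energy, (r, c)

print(best_pos)   # top-left corner of the most energetic (target) block
```

Because random projections approximately preserve energy, decisions can be made on the short vectors `y` instead of the raw pixels, and the overlapping stride keeps a target from straddling block boundaries unseen.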
- Published
- 2017
30. A variable momentum factor filtered-x weighted accumulated LMS algorithm for narrowband active noise control systems
- Author
-
Yonghui Xu, Shouda Jiang, Chao Sun, and Zhong Bo
- Subjects
Recursive least squares filter ,Computational complexity theory ,Applied Mathematics ,Function (mathematics) ,Condensed Matter Physics ,Secondary source ,Least mean squares filter ,Narrowband ,Control theory ,Convergence (routing) ,Electrical and Electronic Engineering ,Instrumentation ,Mathematics ,Active noise control - Abstract
In this paper, a filtered-x weighted accumulated least mean square (FXWALMS) algorithm is proposed for a typical narrowband active noise control (NANC) system by introducing the momentum LMS (MLMS) algorithm into the conventional filtered-x LMS (FXLMS) algorithm. The algorithm uses a new cost function to derive the updating equation for the discrete Fourier coefficients (DFCs) of the secondary source. In this way, the proposed algorithm achieves fast convergence and tracking capabilities at the expense of degraded steady-state performance. To remedy the poor steady-state performance of the FXWALMS algorithm, a variable momentum factor is introduced to improve the overall performance of the system. In addition, compared to the filtered-x recursive least squares (FXRLS) algorithm, the proposed algorithm performs better in terms of computational complexity and tracking capability. Computer simulations demonstrate the superior performance of the proposed algorithm in both stationary and non-stationary scenarios.
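As a rough illustration of the momentum idea that this family of algorithms builds on, the sketch below applies a momentum LMS update to cancel a single tone. It is not the paper's FXWALMS algorithm: the secondary path is assumed to be unity (so the reference needs no filtering), and the step size, momentum factor, tone frequency, and sample counts are all hypothetical:

```python
import numpy as np

# Generic sketch of an LMS update with a momentum term for single-tone
# noise cancellation. Assumes a unity secondary path and hypothetical
# step-size / momentum values; not the paper's exact algorithm.
fs, f0, n_samples = 8000, 200, 4000
t = np.arange(n_samples) / fs
noise = np.sin(2 * np.pi * f0 * t + 0.3)   # primary noise at the error mic

a = b = 0.0            # discrete Fourier coefficients of the secondary source
da = db = 0.0          # momentum (accumulated previous updates)
mu, beta = 0.01, 0.9   # step size and momentum factor (hypothetical)

errors = []
for n in range(n_samples):
    xc, xs = np.cos(2 * np.pi * f0 * t[n]), np.sin(2 * np.pi * f0 * t[n])
    y = a * xc + b * xs            # anti-noise signal
    e = noise[n] - y               # residual at the error microphone
    # momentum LMS: current gradient step plus a fraction of the last step
    da = mu * e * xc + beta * da
    db = mu * e * xs + beta * db
    a, b = a + da, b + db
    errors.append(e)

# residual power should fall by orders of magnitude after convergence
print(np.mean(np.square(errors[:200])) > 100 * np.mean(np.square(errors[-200:])))
```

The momentum term reuses the previous update direction, which is what accelerates convergence; the paper's variable momentum factor then shrinks `beta` near steady state to avoid the excess misadjustment a fixed momentum causes.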
- Published
- 2014
31. DOCO: An Efficient Event Matching Algorithm in Content-Based Publish/Subscribe Systems
- Author
-
Shouda Jiang, Fan Jing, and Jingli Yang
- Subjects
Structure (mathematical logic) ,Matching (statistics) ,business.industry ,Computer science ,Event (computing) ,Process (computing) ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,020202 computer hardware & architecture ,Synchronization (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,The Internet ,Data mining ,business ,Publication ,computer ,Blossom algorithm - Abstract
Content-based publish/subscribe systems are attracting more and more attention in Internet applications because of their intrinsic time, space, and synchronization decoupling properties. As system scale increases, the efficiency of event matching becomes more critical for system performance. However, most existing methods suffer significant performance degradation when the system has large volumes of subscriptions. This paper presents DOCO (a DOuble COmbination event matching algorithm) to improve the efficiency of event matching in content-based publish/subscribe systems. By assembling the attributes in the attribute space into pairs, a novel index structure is built for classifying the subscriptions. On the arrival of an event, the matching process is carried out only on the related units of the index structure, so the number of subscriptions involved in event matching is reduced. A series of experiments is designed to verify the performance of the proposed algorithm, and a comparison with other event matching algorithms is also carried out. The experimental results show that DOCO improves the efficiency of event matching in content-based publish/subscribe systems.
- Published
- 2016
32. A novel fault diagnostic method for analog circuits using frequency response features
- Author
-
Shouda Jiang, Cheng Yang, Tianyu Gao, and Jingli Yang
- Subjects
010302 applied physics ,Frequency response ,Fitness function ,Analogue electronics ,business.industry ,Computer science ,Pattern recognition ,Hardware_PERFORMANCEANDRELIABILITY ,Filter (signal processing) ,Fault (power engineering) ,01 natural sciences ,010305 fluids & plasmas ,Support vector machine ,0103 physical sciences ,Firefly algorithm ,Artificial intelligence ,business ,Instrumentation ,Digital biquad filter - Abstract
Analog circuits are an important component of complex electrical systems, so fault diagnosis of analog circuits plays a vital role in ensuring the reliability of electronic systems. A novel fault diagnostic method for analog circuits, based on a support vector machine (SVM) optimized by the firefly algorithm (FA) and using frequency response features, is presented in this paper. The Wilks Λ-statistic can effectively assess the ability of variables to separate multiple classes of samples in multivariate statistical analysis. Frequency responses of analog circuits are measured, and features are then extracted using the particle swarm optimization (PSO) method, with the Wilks Λ-statistic as the PSO fitness function. An SVM-based fault diagnosis model for analog circuits is then introduced to classify the faulty components according to the extracted frequency response features. The optimal penalty parameter and kernel function parameter of the SVM are obtained using the FA. The method is fully evaluated in fault diagnosis simulations of the Sallen-Key bandpass filter and the four-op-amp biquad high-pass filter. The experimental results demonstrate that the proposed method achieves higher diagnostic accuracy than other typical analog circuit fault diagnosis methods.
- Published
- 2019
33. A Precise Link Loss Inference Algorithm with Minimal Cover Set
- Author
-
Shouda Jiang, Xu Yonghui, and Jingli Yang
- Subjects
Set (abstract data type) ,Inference ,Cover (algebra) ,Electrical and Electronic Engineering ,Link (knot theory) ,Algorithm ,Mathematics - Published
- 2013
34. Atrial activity extraction from single lead ECG recordings: Evaluation of two novel methods
- Author
-
Shouda Jiang, Ye Li, and Huhe Dai
- Subjects
Mean squared error ,Speech recognition ,Beat (acoustics) ,Health Informatics ,Probability density function ,Electrocardiography ,QRS complex ,medicine ,Humans ,Heart Atria ,cardiovascular diseases ,Mathematics ,medicine.diagnostic_test ,business.industry ,Models, Cardiovascular ,Subtraction ,Atrial fibrillation ,Pattern recognition ,Atrial Function ,medicine.disease ,Computer Science Applications ,cardiovascular system ,Artificial intelligence ,Likelihood function ,business ,Algorithms - Abstract
Two methods for extracting the atrial activity (AA) signal from single-lead electrocardiograms (ECGs) of atrial fibrillation are proposed. The first is a weighted average beat subtraction (WABS) method, in which the coefficients of the QRS complexes used to construct the QRS template are obtained by minimizing the mean square error. The second method is based on maximum likelihood estimation (MLE): the probability density functions of the AA signal and the ventricular activity (VA) signal are estimated using a generalized Gaussian model, and the AA signal is then extracted by maximizing the likelihood function. Simulated signals and clinical ECGs were used to evaluate the performance of average beat subtraction (ABS), WABS, and the MLE-based algorithm. In comparison with ABS, the WABS and MLE-based algorithms reduced the normalized mean square error by 23.5% and 20.2%, respectively.
- Published
- 2013
35. Fault detection, isolation, and diagnosis of status self-validating gas sensor arrays
- Author
-
Zhen Shi, Qi Wang, Yinsheng Chen, Jingli Yang, Xu Yonghui, and Shouda Jiang
- Subjects
Computer science ,business.industry ,020208 electrical & electronic engineering ,010401 analytical chemistry ,Pattern recognition ,02 engineering and technology ,Fault (power engineering) ,Residual ,Soft sensor ,01 natural sciences ,Signal ,Fault detection and isolation ,Hilbert–Huang transform ,0104 chemical sciences ,Sample entropy ,Sensor array ,0202 electrical engineering, electronic engineering, information engineering ,Artificial intelligence ,business ,Instrumentation - Abstract
The traditional gas sensor array has been viewed as a simple apparatus for information acquisition in chemosensory systems. Gas sensor arrays frequently undergo impairments in the form of sensor failures that cause significant deterioration of the performance of previously trained pattern recognition models. Reliability monitoring of gas sensor arrays is a challenging and critical issue in the chemosensory system. Because of its importance, we design and implement a status self-validating gas sensor array prototype to enhance the reliability of its measurements. A novel fault detection, isolation, and diagnosis (FDID) strategy is presented in this paper. The principal component analysis-based multivariate statistical process monitoring model can effectively perform fault detection by using the squared prediction error statistic and can locate the faulty sensor in the gas sensor array by using the variables contribution plot. The signal features of gas sensor arrays for different fault modes are extracted by using ensemble empirical mode decomposition (EEMD) coupled with sample entropy (SampEn). The EEMD is applied to adaptively decompose the original gas sensor signals into a finite number of intrinsic mode functions (IMFs) and a residual. The SampEn values of each IMF and the residual are calculated to reveal the multi-scale intrinsic characteristics of the faulty sensor signals. Sparse representation-based classification is introduced to identify the sensor fault type for the purpose of diagnosing deterioration in the gas sensor array. The performance of the proposed strategy is compared with other different diagnostic approaches, and it is fully evaluated in a real status self-validating gas sensor array experimental system. The experimental results demonstrate that the proposed strategy provides an excellent solution to the FDID of status self-validating gas sensor arrays.
- Published
- 2016
36. Sparse complex FxLMS for active noise cancellation over spatial regions
- Author
-
Shouda Jiang, Thushara D. Abhayapala, Jihui Zhang, Wen Zhang, and Prasanga N. Samarasinghe
- Subjects
Noise temperature ,Microphone array ,Adaptive algorithm ,Noise measurement ,Noise (signal processing) ,Computer science ,Acoustics ,Speech recognition ,020206 networking & telecommunications ,02 engineering and technology ,Noise floor ,030507 speech-language pathology & audiology ,03 medical and health sciences ,symbols.namesake ,Noise ,Gaussian noise ,Noise-canceling microphone ,Phase noise ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Loudspeaker ,0305 other medical science ,Active noise control - Abstract
In this paper, we investigate active noise control over large 2D spatial regions when the noise source is sparsely distributed. The l1 relaxation technique, which originates from compressive sensing, is adopted, and based on it we develop algorithms for two cases: multipoint noise cancellation and wave-domain noise cancellation. This results in two new variants: (i) the zero-attracting multi-point complex FxLMS and (ii) the zero-attracting wave-domain complex FxLMS. Both approaches use a feedback control system in which a microphone array is distributed over the boundary of the control region to measure the residual noise signals and a loudspeaker array is placed outside the microphone array to generate the anti-noise signals. Simulation results demonstrate the performance and advantages of the proposed methods in terms of convergence rate and spatial noise reduction levels.
- Published
- 2016
37. Computation of Large-Scale Electric Field in Free Space
- Author
-
Qiao-Chu Cui, Chang-An Wei, Lian-Lei Lin, and Shouda Jiang
- Subjects
010302 applied physics ,Physics ,Field (physics) ,Scale (ratio) ,business.industry ,Computation ,Optical field ,Electric flux ,01 natural sciences ,Electric field ,0103 physical sciences ,Electric potential ,Aerospace engineering ,business ,Electric displacement field - Abstract
A method for modeling the spatial electric field around the Earth is presented to provide spatial electric field data for a virtual test system. First, the electric field at different times under sunny conditions is modeled to generate electric field data over a period of time and space. Then, the influence of thunderstorms on the spatial electric field is analyzed.
- Published
- 2015
38. Data validation and dynamic uncertainty estimation of self-validating sensor
- Author
-
Yinsheng Chen, Shouda Jiang, and Jingli Yang
- Subjects
Engineering ,business.industry ,Process (computing) ,Data validation ,Probability density function ,Fault (power engineering) ,computer.software_genre ,Expression (mathematics) ,Measurement uncertainty ,Sensitivity analysis ,Isolation (database systems) ,Data mining ,business ,computer - Abstract
A novel self-validating strategy using the grey bootstrap method (GBM) is proposed for the data validation and dynamic uncertainty estimation of self-validating sensors. The failure detection, isolation, and recovery (FDIR) of a self-validating sensor based on a GM(1,1) predictor can simultaneously detect and isolate faults and accomplish failure recovery with high accuracy and good timeliness. Furthermore, the proposed FDIR scheme is effective at discriminating between fault-free signals with sudden changes and undoubted faults. In dynamic measurement processes, because prior information about the probability density functions (PDFs) of the uncertainty sources is unknown, the uncertainty cannot be estimated by the Guide to the Expression of Uncertainty in Measurement (GUM). The GBM can evaluate the measurement uncertainty from poor information and small samples. Experimental results show that the GBM strategy provides a good solution to the data validation and dynamic uncertainty estimation of self-validating sensors.
- Published
- 2015
39. Constraint Test Cases Generation Based on Particle Swarm Optimization
- Author
-
Changan Wei, Shouda Jiang, and Yunlong Sheng
- Subjects
Mathematical optimization ,General Computer Science ,Process (engineering) ,Energy Engineering and Power Technology ,Aerospace Engineering ,Particle swarm optimization ,020207 software engineering ,Combinatorial interaction testing ,02 engineering and technology ,Industrial and Manufacturing Engineering ,Test case ,Nuclear Energy and Engineering ,Software testing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Safety, Risk, Reliability and Quality ,Greedy algorithm ,Constraint (mathematics) ,Algorithm ,Selection (genetic algorithm) ,Mathematics - Abstract
The testing of configurations with constraints still faces a great challenge. Although artificial intelligence (AI)-based algorithms perform better than greedy algorithms on t-way testing because of their good ability to search for optimal solutions, only a few AI-based algorithms currently support constraints. Moreover, AI-based algorithms can only discard candidate test cases that conflict with the constraints, even when those candidates are optimal. In this paper, we present two novel particle swarm optimization (PSO)-based constraint test case generation (PCTG) methods. The two methods handle constraints by, respectively, avoiding the selection of conflicting test cases and replacing conflicting test cases; according to these different ways of handling constraints, they guide the search for optimal solutions from different perspectives. We evaluate the two methods against several strong existing strategies in terms of performance. The evaluation results indicate that, in most cases, our proposed methods outperform the other strategies with respect to the sizes of the generated constrained covering arrays.
- Published
- 2017
40. Research of Weapon Equipment Integrated Test Platform
- Author
-
Shouda Jiang and Yangrui Xiang
- Subjects
Engineering ,business.industry ,Interoperability ,computer.software_genre ,Test harness ,Constructive ,Encapsulation (networking) ,Test platform ,Systems engineering ,Operating system ,Test Management Approach ,Test plan ,Architecture ,business ,computer - Abstract
Integrated testing combines tests of different stages and styles, ranging across test resources, test technologies, and test messages. An integrated test platform constructs a distributed test environment in which research departments, universities, and military departments such as ranges can work cooperatively. It is built by developing integrated test conceptual models according to an overall integrated test plan and architecture that enables interoperability among live, virtual, and constructive assets in a quick and cost-efficient manner. For better integration of test resources, we explore several key technologies, including test equipment encapsulation, time-space consistency, instrumentation, and wireless data transmission.
- Published
- 2012
41. Research on Collaborative Environment of the Range Integrated Test and Training
- Author
-
Shouda Jiang and Zhonghua Liu
- Subjects
Knowledge management ,business.industry ,Computer science ,computer.software_genre ,Training (civil) ,Test (assessment) ,Model integration ,Engineering management ,Range (mathematics) ,Resource (project management) ,Architecture ,business ,computer ,Data integration - Abstract
Testing and training are two important links in forming battle effectiveness, and integrating them offers test opportunities for test personnel as well as a training environment for operators, thereby saving resources. A collaborative environment is a way to share resources, and it is introduced here into range-integrated test and training. After constructing the architecture of the collaborative environment, we propose a collaborative framework consisting of distributed data, distributed tools, distributed processes, and distributed people. We then put forward an approach to implementing the collaborative environment that addresses three aspects: data integration, model integration, and process integration. Finally, a feasibility analysis based on weapon equipment development demonstrates the collaborative environment.
- Published
- 2012
42. Research of dynamics of DC speed regulator of flywheel double closed loop based on immune principle
- Author
-
Lifang Xu and Shouda Jiang
- Subjects
Flywheel energy storage ,Artificial immune system ,Control theory ,Computer science ,PID controller ,Rotational speed ,DC motor ,Flywheel ,Machine control - Abstract
In this paper, a DC speed regulator with double closed loops of rotational speed and current is designed for the permanent-magnet brushless DC motor of a flywheel energy storage system. A Varela immune controller is modeled in Matlab/Simulink and applied to the double-closed-loop DC motor speed regulator of the flywheel. Simulation results show that the Varela immune controller achieves better dynamic response than PID control and exhibits good robustness.
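As a point of reference for the PID baseline the abstract compares against, the outer speed loop of such a regulator can be sketched as a discrete PID acting on a first-order motor model (the inner current loop is omitted). All motor constants and gains below are illustrative, not taken from the paper.

```python
def simulate_pid(kp, ki, kd, setpoint=100.0, dt=0.001, steps=5000,
                 tau=0.05, gain=2.0):
    """Discrete PID speed loop around a first-order motor model
    d(omega)/dt = (gain*u - omega)/tau; all constants are illustrative."""
    omega, integ, prev_err = 0.0, 0.0, setpoint
    history = []
    for _ in range(steps):
        err = setpoint - omega
        integ += err * dt                       # integral of the speed error
        deriv = (err - prev_err) / dt           # derivative of the speed error
        u = kp * err + ki * integ + kd * deriv  # PID control effort
        prev_err = err
        omega += (gain * u - omega) / tau * dt  # forward-Euler motor step
        history.append(omega)
    return history

speeds = simulate_pid(kp=0.8, ki=5.0, kd=0.0)
```

An immune controller would replace the fixed-gain control law with an adaptive one; the surrounding loop structure stays the same.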
- Published
- 2011
43. The Hybrid Algorithm of Biogeography Based Optimization and Clone Selection for Sensors Selection of Aircraft
- Author
-
Lifang Xu, Shouda Jiang, and Hongwei Mo
- Subjects
Mathematical optimization ,Optimization problem ,Computer science ,Clone (algebra) ,Selection strategy ,Comparison results ,Nature inspired ,Hybrid algorithm ,Biogeography-based optimization ,Selection (genetic algorithm) - Abstract
Biogeography-based optimization (BBO) is a new optimization algorithm inspired by biogeography: it mimics the migration of species to solve optimization problems. In this paper, a clonal selection strategy is combined with BBO, yielding the hybrid algorithm BBOCSA, to solve the problem of sensor selection for aircraft. It is compared with other classical nature-inspired algorithms. The comparison results show that BBOCSA is an effective algorithm for practical optimization problems and provides a new method for this class of problem.
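The hybrid idea, BBO migration between habitats plus clonal-selection-style cloning and hypermutation of the elite, can be sketched for a toy sensor-selection objective. This is a simplified illustration, not the paper's BBOCSA; the objective, rates, and sensor budget are all hypothetical.

```python
import random

def bbo_csa(fitness, n_bits, pop=20, gens=60, seed=1):
    """Toy hybrid of biogeography-based optimization (migration between
    habitats) and clonal selection (cloning + hypermutating the elite)."""
    rng = random.Random(seed)
    habitats = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        habitats.sort(key=fitness, reverse=True)   # best habitat first
        new = [h[:] for h in habitats]
        for i in range(pop):
            lam = i / (pop - 1)                    # immigration rate grows with rank
            for b in range(n_bits):                # (rank 0 is untouched: elitism)
                if rng.random() < lam:
                    # min of two uniform draws biases the emigrating habitat
                    # toward good (low) ranks, mimicking emigration rates
                    src = min(rng.randrange(pop), rng.randrange(pop))
                    new[i][b] = habitats[src][b]
        # clonal selection: replace the worst habitats with hypermutated clones
        best = habitats[0]
        for i in range(pop - 3, pop):
            clone = best[:]
            for b in range(n_bits):
                if rng.random() < 0.1:
                    clone[b] ^= 1
            new[i] = clone
        habitats = new
    return max(habitats, key=fitness)

values = [4, 7, 1, 9, 3, 8, 2, 6]    # hypothetical per-sensor measurement value
def fitness(x):
    # total value of selected sensors, penalized over a budget of 4 sensors
    total = sum(v for v, sel in zip(values, x) if sel)
    return total - 10 * max(0, sum(x) - 4)

best = bbo_csa(fitness, n_bits=8)
```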
- Published
- 2011
44. Research of Intelligence Control for Flying Altitude of Four Rotors Flyer
- Author
-
Lifang Xu and Shouda Jiang
- Subjects
Attitude control ,Nonlinear system ,Artificial immune system ,Computer science ,Control system ,Genetic algorithm ,Mobile robot ,Radial basis function ,Fuzzy control system ,Simulation - Abstract
A four-rotor flyer (FRF) is difficult to control because its control system is a multi-input, multi-output system that is nonlinear, strongly coupled, and sensitive to disturbances. In this paper, several intelligent control methods, including the genetic algorithm (GA), immune algorithm (IA), radial basis function (RBF) networks, and fuzzy systems (FS), are used to control the flyer, and their performance is analyzed. The results show that RBF achieves the best performance in all tests of pitch and yaw control of the FRF.
- Published
- 2010
45. Automatic Target Detection and Tracking in FLIR Image Sequences Using Morphological Connected Operator
- Author
-
Chang’an Wei and Shouda Jiang
- Subjects
business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Wavelet transform ,Pattern recognition ,Grayscale ,Thresholding ,Haar wavelet ,Object detection ,Wavelet ,Computer Science::Computer Vision and Pattern Recognition ,Computer vision ,Artificial intelligence ,Mean-shift ,Forward looking infrared ,business ,Mathematics - Abstract
In this paper, we propose a method for detecting and tracking small targets in forward-looking infrared (FLIR) image sequences taken from an airborne moving platform. First, we apply a morphological connected operator to remove undesirable clutter from the background. Second, the image is decomposed by a morphological Haar wavelet; a wavelet energy image is computed from the horizontal and vertical detail images and fused with the scaled image. Third, targets are extracted in a coarse-to-fine manner by adaptive double thresholding. Finally, targets are modeled by an intensity probability density function and tracked with the mean shift algorithm. Experiments on the AMCOM FLIR data set verify the validity and robustness of the algorithm.
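The double-thresholding step can be illustrated with a generic hysteresis scheme: pixels above a high threshold seed the targets, and target regions then grow through neighboring pixels above a low threshold. The thresholds here are fixed for illustration, whereas the paper selects them adaptively.

```python
def double_threshold(img, low, high):
    """Coarse-to-fine extraction: seed on pixels >= high, then grow
    regions through pixels >= low (4-connectivity)."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    stack = [(r, c) for r in range(rows) for c in range(cols) if img[r][c] >= high]
    for r, c in stack:
        out[r][c] = 1                              # strong seeds
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and not out[rr][cc] and img[rr][cc] >= low):
                out[rr][cc] = 1                    # weak pixel attached to a seed
                stack.append((rr, cc))
    return out
```

Isolated weak responses (above `low` but not connected to any seed) are discarded, which is what suppresses residual clutter.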
- Published
- 2008
46. A Fast Training Algorithm for Least Squares SVM
- Author
-
Shouda Jiang, Lianlei Lin, and Chao Sun
- Subjects
Structured support vector machine ,business.industry ,Population-based incremental learning ,Stability (learning theory) ,Online machine learning ,Pattern recognition ,Least squares ,Relevance vector machine ,Support vector machine ,Least squares support vector machine ,Artificial intelligence ,business ,Algorithm ,Mathematics - Abstract
A fast training algorithm for Least Squares SVM (LS-SVM) classifiers is proposed, based on incremental and decremental learning. When a support vector (SV) is added or removed, computation based on the previous training result replaces a large-scale matrix inversion, so the computational cost is reduced. The innovation is that, by reasonable use of incremental and decremental learning, the proposed algorithm can adaptively adjust the size of the training set (the number of SVs) according to the specific classification problem. Finally, several experiments show the validity of the proposed algorithm.
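For context, the batch LS-SVM problem that incremental/decremental updates avoid re-solving reduces to a single linear system (the Suykens formulation). The sketch below shows only that batch solve; the paper's incremental update rules are not reproduced, and the kernel and regularization values are illustrative.

```python
import numpy as np

def rbf(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def lssvm_train(X, y, gamma=1.0, C=10.0):
    """Solve the LS-SVM KKT linear system
        [ 0    y^T       ] [b]   [0]
        [ y    Omega+I/C ] [a] = [1]
    with Omega_ij = y_i y_j K(x_i, x_j): one linear solve instead of a QP."""
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, multipliers alpha

def lssvm_predict(X, y, alpha, b, Xnew, gamma=1.0):
    """Decision: sign(sum_i alpha_i y_i K(x_i, x) + b)."""
    return np.sign(rbf(Xnew, X, gamma) @ (alpha * y) + b)
```

Because adding or removing one row/column of this system is a low-rank change, an incremental scheme can update the solution from the previous one rather than re-solving from scratch, which is the cost saving the abstract refers to.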
- Published
- 2007
47. A Simple Method of Computing Signal Order Tracking Fast in Time Domain
- Author
-
Shouda Jiang and Yang Liu
- Subjects
Vibration ,Transient vibration ,Simple (abstract algebra) ,Computer science ,Real-time computing ,Effective method ,Time domain ,Order tracking ,Algorithm ,Signal ,Dynamic testing - Abstract
Order tracking spectrum analysis is an effective method for studying transient vibration signals. Through careful study of its principles and applications in vibration testing, a new, simple formula for fast computation of signal order tracking is put forward. With this method, a coarse order tracking of a vibration signal can be derived easily in the time domain. In addition, some principles for selecting the parameters of order tracking spectrum analysis are discussed with the help of the new method. Finally, experiments show its feasibility and validity for real-time processing and analysis of vibration signals.
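A common time-domain route to order tracking is computed order tracking: integrate the measured shaft speed to obtain a phase curve, then resample the vibration signal at uniform angle increments so that shaft orders become fixed frequencies. The sketch below shows that generic idea, not the paper's specific formula.

```python
import math

def angular_resample(times, signal, rpm, samples_per_rev=32):
    """Computed order tracking in the time domain: trapezoidal integration
    of the (possibly varying) shaft speed gives angle vs. time, then the
    signal is linearly interpolated at uniform angle increments."""
    # shaft angle in revolutions at each time sample
    angle = [0.0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        angle.append(angle[-1] + 0.5 * (rpm[i] + rpm[i - 1]) / 60.0 * dt)
    # uniform angle grid, linear interpolation back onto the time signal
    n = int(angle[-1] * samples_per_rev)
    out, j = [], 0
    for k in range(n):
        a = k / samples_per_rev
        while angle[j + 1] < a:
            j += 1
        t = (a - angle[j]) / (angle[j + 1] - angle[j])
        out.append(signal[j] * (1 - t) + signal[j + 1] * t)
    return out
```

After resampling, an ordinary FFT of `out` yields an order spectrum, with `samples_per_rev` bounding the highest resolvable order.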
- Published
- 2007
48. Use Support Vector Machine to Evaluate the Operational Effectiveness of Radar Jammer
- Author
-
Lianlei Lin and Shouda Jiang
- Subjects
Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Jamming ,computer.software_genre ,law.invention ,Relevance vector machine ,Support vector machine ,ComputingMethodologies_PATTERNRECOGNITION ,law ,Evaluation methods ,Radial basis function ,Data mining ,Operational effectiveness ,Radar ,Classifier (UML) ,computer - Abstract
For the operational effectiveness evaluation of radar jammers, a general evaluation approach is given. The evaluation problem is converted into a multi-class classification problem for a support vector machine (SVM), and an evaluation method based on the C-SVM classifier is proposed. A radial basis function (RBF) is selected as the kernel, and cross-validation is used for parameter optimization. Finally, a simulation experiment shows that the method is simple and effective.
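The parameter-selection idea, an RBF kernel whose width is chosen by cross-validation, can be illustrated without a full SVM implementation. Below, a kernel-weighted class vote stands in for the C-SVM classifier and leave-one-out accuracy stands in for k-fold cross-validation; the effectiveness grades and feature values are invented for illustration.

```python
import math

def rbf_scores(x, data, gamma):
    """Per-class sum of RBF similarities exp(-gamma * ||x - xi||^2)."""
    scores = {}
    for xi, yi in data:
        k = math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, xi)))
        scores[yi] = scores.get(yi, 0.0) + k
    return scores

def classify(x, data, gamma):
    s = rbf_scores(x, data, gamma)
    return max(s, key=s.get)

def loo_accuracy(data, gamma):
    """Leave-one-out cross-validation accuracy for a given kernel width."""
    hits = 0
    for i, (xi, yi) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        hits += classify(xi, rest, gamma) == yi
    return hits / len(data)

def select_gamma(data, grid=(0.01, 0.1, 1.0, 10.0)):
    """Grid search: pick the kernel width with the best CV accuracy."""
    return max(grid, key=lambda g: loo_accuracy(data, g))
```

A real C-SVM adds a regularization parameter C to the same grid search, but the selection loop is structurally identical.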
- Published
- 2007
49. Design of Universal Simulation Platform for Avionics
- Author
-
Shouda Jiang and Lianlei Lin
- Subjects
Computer science ,Aerospace simulation ,business.industry ,Embedded system ,Avionics ,computer.software_genre ,business ,Integrated modular avionics ,computer ,Simulation software ,Universality (dynamical systems) - Abstract
To address the lack of universality and flexibility in current avionics simulators, a universal semi-physical simulation platform was designed. The innovation is a simulation database that describes the behavior, attributes, and interface information of the avionics; simulation is realized by software analysis of the database filled with this information. Because the object to be simulated is separated from the simulation software, the universality and flexibility of the platform are greatly enhanced. This simulation platform has been successfully used in the joint simulation experiments of one airplane's avionics system.
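The database-driven idea, where a device's behavior, attributes, and interface are described as data and interpreted by a generic engine, might look like the following. The record layout, ARINC 429 label, and altitude profile are all invented for illustration.

```python
# Hypothetical record: device behavior, attributes and interface are data,
# so the driver below contains no device-specific code.
avionics_db = {
    "radar_altimeter": {
        "interface": {"bus": "ARINC429", "label": 0o164, "rate_hz": 20},
        "attributes": {"min_ft": 0, "max_ft": 2500},
        "behavior": lambda t: min(2500, 50 * t),   # altitude profile stub
    },
}

def simulate(db, name, t):
    """Generic driver: look up the device record and produce one output
    word from the data alone."""
    rec = db[name]
    value = rec["behavior"](t)
    lo, hi = rec["attributes"]["min_ft"], rec["attributes"]["max_ft"]
    value = max(lo, min(hi, value))                # clamp to declared range
    return {"label": rec["interface"]["label"], "value": value}

word = simulate(avionics_db, "radar_altimeter", t=10.0)
```

Simulating a new device then means adding a record, not writing new simulator code, which is the separation the abstract describes.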
- Published
- 2006
50. Energy allocation algorithm for index transmission of codeword using modified tabu search approach with simulated annealing
- Author
-
Shouda Jiang, Zhaoqing Liu, Qingming Ge, and Zhe-Ming Lu
- Subjects
Mathematical optimization ,Distortion ,Quantization (signal processing) ,Simulated annealing ,Bit error rate ,Vector quantization ,Guided Local Search ,Data_CODINGANDINFORMATIONTHEORY ,Algorithm ,Hill climbing ,Tabu search ,Mathematics - Abstract
Codeword index assignment (CIA) is a key issue in vector quantization (VQ). Tabu search has achieved some success in solving the codeword index assignment problem, and, combined with the notion of tabu search, energy allocation schemes have also been used successfully to overcome channel error sensitivity. In this paper, a new algorithm, the modified tabu energy allocation algorithm (MTEAA), is applied to the index transmission of codewords with the aim of minimizing the extra distortion due to bit errors. A simulated annealing (SA) technique and a new parameter are introduced into the iterations of the tabu energy allocation approach (TEAA) to improve its convergence. Experimental results on channel distortion demonstrate that the proposed algorithm is superior to both TEAA and the conventional energy allocation algorithm (CEAA).
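The combination of a tabu list with a simulated-annealing acceptance rule can be sketched on a toy index-assignment problem: permute codeword indices so that single-bit index errors land on nearby codewords. This is a generic sketch, not the paper's MTEAA; the cost proxy, tabu length, and cooling schedule are all hypothetical.

```python
import math, random

def cost(perm, codebook, bits):
    """Extra-distortion proxy: sum of squared codeword distances over all
    index pairs that differ in exactly one bit."""
    total = 0.0
    for i in range(len(perm)):
        for b in range(bits):
            j = i ^ (1 << b)
            if j > i:
                ci, cj = codebook[perm[i]], codebook[perm[j]]
                total += sum((a - d) ** 2 for a, d in zip(ci, cj))
    return total

def tabu_sa_assign(codebook, iters=400, tabu_len=10, t0=5.0, seed=3):
    """Tabu search over index swaps, with a simulated-annealing rule for
    accepting non-improving moves."""
    rng = random.Random(seed)
    n = len(codebook)
    bits = n.bit_length() - 1
    perm = list(range(n))
    best, best_cost = perm[:], cost(perm, codebook, bits)
    cur_cost, tabu, temp = best_cost, [], t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        if (i, j) in tabu or (j, i) in tabu:       # recently used swap: forbidden
            continue
        perm[i], perm[j] = perm[j], perm[i]
        c = cost(perm, codebook, bits)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / temp):
            cur_cost = c                           # accept (SA rule for worse moves)
            tabu = (tabu + [(i, j)])[-tabu_len:]
            if c < best_cost:
                best, best_cost = perm[:], c
        else:
            perm[i], perm[j] = perm[j], perm[i]    # reject: undo the swap
        temp *= 0.99                               # cooling
    return best, best_cost

codebook = [(float(k),) for k in (0, 3, 1, 7, 2, 6, 4, 5)]  # scrambled scalar codebook
assignment, d = tabu_sa_assign(codebook)
```

The tabu list keeps the search from cycling through recent swaps, while the cooling temperature gradually withdraws permission for uphill moves, which is the convergence improvement the abstract attributes to the SA technique.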
- Published
- 2002