27,554 results for "Subspace topology"
Search Results
52. Multiview Consensus Structure Discovery
- Author
-
Jun Yu, Jigang Wu, Min Meng, and Mengcheng Lan
- Subjects
Structure (mathematical logic) ,Consensus ,Computer science ,business.industry ,Feature vector ,Machine learning ,computer.software_genre ,Linear subspace ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,Cluster Analysis ,Learning ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Representation (mathematics) ,computer ,Algorithms ,Software ,Subspace topology ,Eigendecomposition of a matrix ,Information Systems - Abstract
Multiview subspace learning has attracted much attention due to the efficacy of exploring the information on multiview features. Most existing methods perform data reconstruction on the original feature space and thus are vulnerable to noisy data. In this article, we propose a novel multiview subspace learning method, called multiview consensus structure discovery (MvCSD). Specifically, we learn the low-dimensional subspaces corresponding to different views and simultaneously pursue structure consensus over subspace clustering for multiple views. In this way, latent subspaces from different views regularize each other toward a common consensus that reveals the underlying cluster structure. Compared to existing methods, MvCSD leverages the consensus structure derived from the subspaces of diverse views to better exploit the intrinsic complementary information that reflects the essence of the data. Accordingly, the proposed MvCSD is capable of producing a more robust and accurate representation structure, which is crucial for multiview subspace learning. The proposed method can be optimized effectively, with a theoretical convergence guarantee, by alternately iterating an augmented Lagrangian multiplier algorithm and an eigendecomposition. Extensive experiments on diverse datasets demonstrate the advantages of our method over state-of-the-art methods.
- Published
- 2022
53. Data-Driven Discovery of Block-Oriented Nonlinear Models Using Sparse Null-Subspace Methods
- Author
-
Ye Yuan, Hai-Tao Zhang, Guanrong Chen, Xiuting Li, and Junlin Li
- Subjects
Nonlinear system identification ,Computer science ,Augmented Lagrangian method ,Null (mathematics) ,MathematicsofComputing_NUMERICALANALYSIS ,Function (mathematics) ,Regularization (mathematics) ,Computer Science Applications ,Human-Computer Interaction ,Nonlinear system ,Nonlinear Dynamics ,Control and Systems Engineering ,Electrical and Electronic Engineering ,Algorithm ,Algorithms ,Software ,Subspace topology ,Information Systems - Abstract
This article develops an identification algorithm for nonlinear systems. Specifically, the nonlinear system identification problem is formulated as a sparse recovery problem for a homogeneous variant, searching for the sparsest vector in the null subspace. An augmented Lagrangian function is utilized to relax the nonconvex optimization. Thereafter, an algorithm based on the alternating direction method and a regularization technique is proposed to solve the sparse recovery problem. The convergence of the proposed algorithm is guaranteed through theoretical analysis. Moreover, with the proposed sparse identification method, redundant terms in the nonlinear functional forms are removed and the computational efficiency is thus substantially enhanced. Numerical simulations are presented to verify the effectiveness and superiority of the proposed algorithm.
- Published
- 2022
54. Multi-UAV Cooperative Localization for Marine Targets Based on Weighted Subspace Fitting in SAGIN Environment
- Author
-
Xianpeng Wang, Laurence T. Yang, Kaoru Ota, Mianxiong Dong, Huafei Wang, and Dandan Meng
- Subjects
Computational complexity theory ,Computer Networks and Communications ,Computer science ,MIMO ,Real-time computing ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,020206 networking & telecommunications ,02 engineering and technology ,Sparse approximation ,Computer Science Applications ,Matrix (mathematics) ,Data model ,Hardware and Architecture ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Subspace topology ,Information Systems ,Sparse matrix ,Block (data storage) - Abstract
As an indispensable part of the Internet of Vehicles (IoV), unmanned aerial vehicles (UAVs) can be deployed for target positioning and navigation in the space-air-ground integrated network (SAGIN) environment. Maritime target positioning is very important for the safe navigation of ships, hydrographic surveys, and marine resource exploration. Traditional methods typically exploit satellites to locate marine targets in the SAGIN environment, and the resulting location accuracy does not satisfy the requirements of modern ocean observation missions. To localize marine targets, we develop a system architecture in this paper that contains UAVs equipped with monostatic multiple-input multiple-output (MIMO) radars. The main thrust is to estimate the direction-of-arrival (DOA) via MIMO radar. Herein, we consider a general scenario in which unknown mutual coupling exists, and a novel sparse reconstruction algorithm is proposed. Exploiting the special structure of the mutual coupling matrix (MCM), we formulate the data model in a sparse representation form. Then two novel matrices, a weighted matrix and a reduced-dimensional matrix, are constructed to reduce the computational complexity and enhance the sparsity, respectively. Thereafter, a sparse constraint model is constructed using the concept of optimal weighted subspace fitting (WSF). Finally, DOA estimation of maritime targets is achieved by reconstructing the support of a block sparse matrix. Based on the DOA estimation results, multiple UAVs cross-locate marine targets multiple times, and an accurate marine target position is obtained in the SAGIN environment. Numerical results demonstrate the effectiveness of the proposed DOA estimator and show that the multi-UAV cooperative localization system can realize accurate target localization.
- Published
- 2022
55. Visualization and Analysis of Single Cell RNA-Seq Data by Maximizing Correntropy Based Non-Negative Low Rank Representation
- Author
-
Junliang Shang, Cui-Na Jiao, Juan-Wang Wang, Chun-Hou Zheng, and Jin-Xing Liu
- Subjects
Optimization problem ,business.industry ,Iterative method ,Computer science ,Gene Expression Profiling ,Rank (computer programming) ,Pattern recognition ,Computer Science Applications ,Visualization ,Health Information Management ,Robustness (computer science) ,Outlier ,Cluster Analysis ,Humans ,RNA-Seq ,Artificial intelligence ,Single-Cell Analysis ,Electrical and Electronic Engineering ,business ,Representation (mathematics) ,Algorithms ,Subspace topology ,Biotechnology - Abstract
The development of single cell RNA-sequencing (scRNA-seq) technology provides a new perspective for analyzing biological problems. One of the major applications of scRNA-seq data is to discover cell subtypes by cell clustering. Nevertheless, it is challenging for traditional methods to handle scRNA-seq data with high levels of technical noise and notorious dropout events. To better analyze single cell data, a novel scRNA-seq data analysis model called Maximum correntropy criterion based Non-negative and Low Rank Representation (MccNLRR) is introduced. Specifically, the maximum correntropy criterion, as an effective loss function, is more robust to the high noise and large outliers present in the data. Moreover, low rank representation has proven to be a powerful tool for capturing the global and local structures of data; therefore, some important information, such as the similarity of cells in the subspace, is also extracted by it. Then, an iterative algorithm based on half-quadratic optimization and the alternating direction method is developed to solve the resulting optimization problem. We also analyze the convergence and robustness of MccNLRR. Finally, the results of cell clustering, visualization analysis, and gene marker selection on scRNA-seq data reveal that the MccNLRR method can distinguish cell subtypes accurately and robustly.
- Published
- 2022
56. Multi-Weight Domain Adversarial Network for Partial-Set Transfer Diagnosis
- Author
-
Jinyang Jiao, Jing Lin, and Ming Zhao
- Subjects
Generalization ,business.industry ,Computer science ,Machine learning ,computer.software_genre ,Fault (power engineering) ,Domain (software engineering) ,Set (abstract data type) ,Identification (information) ,Discriminative model ,Control and Systems Engineering ,Outlier ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,computer ,Subspace topology - Abstract
To realize fault identification of unlabeled data and improve model generalization capability, domain adaptation has been increasingly applied to intelligent fault diagnosis of machinery. Nevertheless, traditional domain adaptation diagnosis models generally constrain different domains to share the same label space, which does not always hold in complex industrial scenarios. Consequently, a more practical scenario, i.e., partial-set transfer diagnosis (PSTD), is explored in this work, where the target label space is a subspace of the source label space. A multi-weight domain adversarial network (MWDAN) is proposed to solve this problem, in which class-level and instance-level weighting mechanisms are jointly designed to quantify the transferability and importance of data examples. Based on the proposed strategy, positive transfer between shared classes is promoted while the negative effect caused by outlier classes is circumvented. As a result, MWDAN learns discriminative representations for accurate fault diagnosis in the target domain. Extensive experiments on two mechanical systems demonstrate the outstanding performance of MWDAN.
- Published
- 2022
57. Manifold-Inspired Search-Based Algorithm for Automated Test Case Generation
- Author
-
Yang Zhongming, Stuart Dereck Semujju, Zhifeng Hao, Fangqing Liu, Junpeng Su, and Han Huang
- Subjects
Optimization problem ,Computer science ,Function (mathematics) ,Linear subspace ,Manifold ,Computer Science Applications ,Human-Computer Interaction ,Distribution (mathematics) ,Test case ,Path (graph theory) ,Computer Science (miscellaneous) ,Algorithm ,Subspace topology ,Information Systems - Abstract
Automated test case generation based on path coverage (ATCG-PC) is a black-box optimization problem whose difficulty is attributed to the one-to-many relationship between paths and test cases, which results in a large number of redundant function evaluations during the search process of ATCG-PC algorithms. To minimize these redundant function evaluations, equivalent mapping subspaces are defined to decompose the search space according to the paths. Inspired by the data distribution hypothesis, we assume that the target path can be covered by searching in one neighborhood of a test case (an equivalent mapping subspace) instead of the whole search space. This paper presents a manifold-inspired search-based algorithm that finds the equivalent mapping subspaces with a test-case-path relationship matrix. Furthermore, the algorithm generates test cases covering all possible paths by searching in the found subspaces. The experimental results show that the proposed algorithm consumes significantly fewer function evaluations than state-of-the-art algorithms while achieving the highest path coverage rate on two open-source toolkits and 16 open-source real-world programs. Several orders of magnitude of function evaluations can be saved by searching in the found equivalent mapping subspaces instead of exploring the whole search space.
- Published
- 2022
58. Heterogeneous Graph Attention Network for Unsupervised Multiple-Target Domain Adaptation
- Author
-
Dacheng Tao, Tongliang Liu, Cheng Deng, and Xu Yang
- Subjects
Domain adaptation ,Computer science ,02 engineering and technology ,Machine learning ,computer.software_genre ,Machine Learning ,Artificial Intelligence ,Attention network ,0202 electrical engineering, electronic engineering, information engineering ,business.industry ,Applied Mathematics ,Multiple target ,Graph ,Semantics ,ComputingMethodologies_PATTERNRECOGNITION ,Computational Theory and Mathematics ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Pairwise comparison ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Algorithms ,Software ,Subspace topology - Abstract
Domain adaptation, which transfers knowledge from a label-rich source domain to unlabeled target domains, is a challenging task in machine learning. Prior domain adaptation methods focus on the pairwise adaptation assumption with a single source and a single target domain, while little work concerns the scenario of one source domain and multiple target domains. Applying pairwise adaptation methods to this setting may be suboptimal, as they fail to consider the semantic association among multiple target domains. In this work, we propose a deep semantic information propagation approach in the novel context of multiple unlabeled target domains and one labeled source domain. Our model aims to learn a unified subspace common to all domains with a heterogeneous graph attention network, where the transductive ability of the graph attention network conducts semantic propagation of related samples among multiple domains. In particular, the attention mechanism is applied to optimize the relationships of multiple domain samples for better semantic transfer. Then, the pseudo labels of the target domains predicted by the graph attention network are utilized to learn domain-invariant representations by aligning the labeled source centroid and the pseudo-labeled target centroid. We test our approach on four challenging public datasets, and it outperforms several popular domain adaptation methods.
- Published
- 2022
59. Quantifying the Alignment of Graph and Features in Deep Learning
- Author
-
Pietro Panzarasa, Tom Rieu, Yifan Qian, Paul Expert, Mauricio Barahona, and Engineering & Physical Science Research Council (EPSRC)
- Subjects
FOS: Computer and information sciences ,principal angles ,Computer Science - Machine Learning ,Technology ,Data alignment ,Computer science ,cs.LG ,02 engineering and technology ,Computer Science, Artificial Intelligence ,Machine Learning (cs.LG) ,Engineering ,Statistics - Machine Learning ,Chordal graph ,0202 electrical engineering, electronic engineering, information engineering ,Artificial Intelligence & Image Processing ,physics.soc-ph ,Computer Science - Neural and Evolutionary Computing ,Computer Science - Social and Information Networks ,SCIENCE ,stat.ML ,Linear subspace ,Graph ,Computer Science Applications ,Task analysis ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,graph subspaces ,cs.SI ,Subspace topology ,Physics - Physics and Society ,Computer Networks and Communications ,Matrix norm ,FOS: Physical sciences ,Machine Learning (stat.ML) ,Physics and Society (physics.soc-ph) ,Measure (mathematics) ,Symmetric matrices ,Deep Learning ,Computer Science, Theory & Methods ,Artificial Intelligence ,Training ,Neural and Evolutionary Computing (cs.NE) ,cs.NE ,Computer Science, Hardware & Architecture ,Social and Information Networks (cs.SI) ,Science & Technology ,Nonhomogeneous media ,Learning systems ,business.industry ,Deep learning ,Engineering, Electrical & Electronic ,Pattern recognition ,Convolution ,graph convolutional networks (GCNs) ,Computer Science ,Neural Networks, Computer ,Artificial intelligence ,business ,Software - Abstract
We show that the classification performance of graph convolutional networks (GCNs) is related to the alignment between features, graph, and ground truth, which we quantify using a subspace alignment measure (SAM) corresponding to the Frobenius norm of the matrix of pairwise chordal distances between three subspaces associated with features, graph, and ground truth. The proposed measure is based on the principal angles between subspaces and has both spectral and geometrical interpretations. We showcase the relationship between the SAM and the classification performance through the study of limiting cases of GCNs and systematic randomizations of both features and graph structure applied to a constructive example and several examples of citation networks of different origins. The analysis also reveals the relative importance of the graph and features for classification purposes.
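For illustration, here is a minimal Python sketch of a subspace alignment measure of the kind described above: principal angles between pairs of subspaces give chordal distances, and the Frobenius norm of the pairwise-distance matrix gives a single score. The random bases stand in for the feature, graph, and ground-truth subspaces; their actual construction in the paper is not reproduced.

```python
import numpy as np
from scipy.linalg import subspace_angles

def chordal_distance(A, B):
    """Chordal distance between the column spaces of A and B,
    computed from the principal angles between the two subspaces."""
    theta = subspace_angles(A, B)          # principal angles in radians
    return np.sqrt(np.sum(np.sin(theta) ** 2))

def alignment_measure(subspaces):
    """Frobenius norm of the matrix of pairwise chordal distances.
    `subspaces` is a list of matrices whose columns span each subspace
    (here stand-ins for the feature, graph, and ground-truth subspaces)."""
    k = len(subspaces)
    D = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i != j:
                D[i, j] = chordal_distance(subspaces[i], subspaces[j])
    return np.linalg.norm(D, "fro")

# Toy usage with random bases standing in for the three subspaces.
rng = np.random.default_rng(0)
U_feat, U_graph, U_truth = (rng.standard_normal((50, 5)) for _ in range(3))
print(alignment_measure([U_feat, U_graph, U_truth]))
```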
- Published
- 2022
60. An Online Compensation Method of VSI Nonlinearity for Dual Three-Phase PMSM Drives Using Current Injection
- Author
-
Kailiang Yu and Zheng Wang
- Subjects
Three-phase ,Computer science ,Control theory ,Torque ,Inverter ,Electrical and Electronic Engineering ,Synchronous motor ,Subspace topology ,Decoupling (electronics) ,Compensation (engineering) ,Voltage - Abstract
Compensation of voltage-source-inverter (VSI) nonlinearity is of great importance for parameter identification and position sensorless control of motor drives. In this paper, a simple online compensation method is proposed for dual three-phase permanent-magnet synchronous motor (PMSM) drives. The key is to observe the change of the voltage references containing the VSI nonlinearity after purposely injecting currents in the z1z2 subspace. Compared to existing estimation methods for VSI nonlinearity, the proposed method not only avoids the tedious and time-consuming off-line tests for modeling the inverter, but also achieves real-time estimation without machine parameters. Moreover, the current injection in the z1z2 subspace has only a slight influence on the dq subspace (the torque subspace) thanks to the multiple decoupled subspaces of the dual three-phase PMSM. Experiments are presented to verify the validity of the proposed method.
- Published
- 2022
61. Supervised Low-Rank Embedded Regression (SLRER) for Robust Subspace Learning
- Author
-
Guowei Yang, Tianming Zhan, Yu Yao, and Minghua Wan
- Subjects
Optimization problem ,Rank (linear algebra) ,Computer science ,business.industry ,Feature extraction ,Pattern recognition ,Regression ,Norm (mathematics) ,Outlier ,Media Technology ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Projection (set theory) ,Subspace topology - Abstract
Locality-preserving projection (LPP) has been widely used in feature extraction. However, LPP does not use data category information and uses the L2-norm for distance measurement, which is highly sensitive to outliers. In this paper, we consider the LPP weight matrix from a supervised perspective and combine it with a low-rank regression method to propose a new model for discovering and extracting features. By using the L2,1-norm to constrain both the loss function and the regression matrix, the sensitivity to outliers is reduced and the regression matrix is constrained to be low-rank. Then, we propose a solution to the optimization problem. Finally, we apply the method to a series of face databases, handwritten digit datasets and palmprint datasets to test its performance, and the experimental results show that this method is effective compared with some existing methods.
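A brief sketch of the L2,1-norm the abstract refers to, and why it is less outlier-sensitive than a squared Frobenius penalty; the rest of the SLRER model is not reproduced, and the residual values are made up for illustration.

```python
import numpy as np

def l21_norm(M):
    """L2,1 norm: sum of the Euclidean norms of the rows of M.
    Unlike the squared Frobenius norm, each sample (row) contributes
    linearly rather than quadratically, so a large per-sample residual
    (an outlier) is penalized far less aggressively."""
    return float(np.linalg.norm(M, axis=1).sum())

residual = np.array([[0.1, 0.2],
                     [0.0, 0.1],
                     [5.0, 4.0]])   # one outlying sample
print(l21_norm(residual), np.linalg.norm(residual) ** 2)  # vs squared Frobenius
```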
- Published
- 2022
62. Robust supervised discrete hashing
- Author
-
Wei Zhang, Xiangqin Dai, Yao Xiao, Xiangguang Dai, and Nian Zhang
- Subjects
Computer science ,business.industry ,Cognitive Neuroscience ,Hash function ,Cauchy distribution ,Pattern recognition ,Computer Science Applications ,Matrix (mathematics) ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Outlier ,Binary code ,Artificial intelligence ,Noise (video) ,business ,Image retrieval ,Computer Science::Databases ,Subspace topology ,Computer Science::Cryptography and Security - Abstract
In this paper, we propose a more robust supervised hashing framework based on the Cauchy loss function and Supervised Discrete Hashing (SDH), called Robust Supervised Discrete Hashing (RSDH), which can learn a robust subspace consisting of binary codes. The Cauchy loss is used to measure the error between the label matrix and the product of the decomposed matrices. RSDH not only reduces the influence of outliers and noise on the hashing codes, but also achieves more satisfactory retrieval performance. Image retrieval experiments demonstrate that RSDH performs better than other hashing methods.
- Published
- 2022
63. Evolutionary Multitasking for Multiobjective Optimization With Subspace Alignment and Adaptive Differential Evolution
- Author
-
Zhengping Liang, Weiqi Liang, Hao Dong, Cheng Liu, and Zexuan Zhu
- Subjects
education.field_of_study ,Computer science ,business.industry ,Population ,Evolutionary algorithm ,Multi-objective optimization ,Evolutionary computation ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,Differential evolution ,Human multitasking ,Artificial intelligence ,Electrical and Electronic Engineering ,education ,business ,Knowledge transfer ,Software ,Subspace topology ,Information Systems - Abstract
In contrast to traditional single-tasking evolutionary algorithms, evolutionary multitasking (EMT) travels the search spaces of multiple optimization tasks simultaneously. By sharing knowledge across the tasks, EMT can enhance the solving of the optimization tasks. However, if knowledge transfer is not properly carried out, the performance of EMT may become unsatisfactory. To address this issue and improve the quality of knowledge transfer among the tasks, a novel multiobjective EMT algorithm based on subspace alignment and self-adaptive differential evolution (DE), namely MOMFEA-SADE, is proposed in this article. In particular, a mapping matrix obtained by subspace learning is used to transform the search space of the population and reduce the probability of negative knowledge transfer between tasks. In addition, DE characterized by a self-adaptive trial vector generation strategy is introduced to generate promising solutions based on previous experience. Experimental results on multiobjective multi/many-tasking optimization test suites show that MOMFEA-SADE is superior or comparable to other state-of-the-art EMT algorithms. MOMFEA-SADE also won the Competition on Evolutionary Multitask Optimization (multitask multiobjective optimization track) at the IEEE 2019 Congress on Evolutionary Computation.
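A minimal sketch of the DE/rand/1/bin trial-vector generation underlying the algorithm; the self-adaptive parameter strategy and the inter-task subspace alignment of MOMFEA-SADE are omitted, and all names are illustrative.

```python
import numpy as np

def de_trial_vector(pop, i, F=0.5, CR=0.9, rng=None):
    """DE/rand/1/bin trial-vector generation for individual i.
    pop: (NP, D) population. MOMFEA-SADE self-adapts F and CR from
    previous successes and adds cross-task subspace alignment; this
    only shows the basic differential-evolution operator."""
    if rng is None:
        rng = np.random.default_rng()
    NP, D = pop.shape
    r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])       # differential mutation
    cross = rng.random(D) < CR                       # binomial crossover mask
    cross[rng.integers(D)] = True                    # at least one gene crosses
    return np.where(cross, mutant, pop[i])

# Toy usage on a random population.
rng = np.random.default_rng(3)
pop = rng.random((10, 5))
print(de_trial_vector(pop, i=0, rng=rng))
```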
- Published
- 2022
64. Candidate Modulation Patterns Solution for Five-Phase PMSM Drive System
- Author
-
Zaixin Song, Yongcan Huang, Zhiping Dong, Chunhua Liu, and Senyi Liu
- Subjects
Computer science ,Phase (waves) ,Energy Engineering and Power Technology ,Transportation ,Harmonic analysis ,Model predictive control ,Control theory ,Modulation ,Automotive Engineering ,Harmonic ,Torque ,Electrical and Electronic Engineering ,Space vector modulation ,Subspace topology - Abstract
Model predictive control (MPC) schemes applied to multiphase permanent magnet synchronous motors (PMSMs) should provide precise torque output and suppress harmonic currents at the same time, so a trade-off between precise torque output and harmonic current suppression is inevitable in finite control set model predictive control (FCS-MPC). To provide an alternative solution to this problem, this paper presents a new extension strategy of FCS-MPC for multiphase PMSMs. The novelty of this paper has two parts. First, the control sets are extended from candidate voltage vectors to candidate modulation patterns, which consist of switching signals with specific rules. Second, a new cost function realizes harmonic current suppression through the modulation pattern selection. Two groups of modulation patterns are developed to assess the performance of this structure. The first group, called the “semi-controlled modulation pattern,” provides precise voltage vectors in the αβ subspace. The second group, called the “fundamental torque-controlled modulation pattern,” further suppresses the harmonic currents. These two groups of modulation patterns can realize the required performance with a much lower switching frequency. Moreover, parameter mismatches in the harmonic subspace caused by system uncertainties and errors in the parameter values are also compensated in the proposed MPC. Finally, comparative experiments show the performance differences between the two modulation patterns and existing controllers.
- Published
- 2022
65. Optimizing Driver Nodes for Structural Controllability of Temporal Networks
- Author
-
Manikya Valli Srighakollapu, Ramkrishna Pasumarthy, and Rachel Kalpana Kalaimani
- Subjects
Control and Optimization ,Computer Networks and Communications ,Computer science ,Topology (electrical circuits) ,Topology ,Submodular set function ,Controllability ,Dimension (vector space) ,Control and Systems Engineering ,Signal Processing ,Node (circuits) ,Enhanced Data Rates for GSM Evolution ,Greedy algorithm ,Subspace topology - Abstract
We derive conditions for structural controllability of temporal networks whose topology and edge weights change with time. Existing results on structural controllability of directed networks assume that all edge weights are chosen independently of each other. The undirected case is challenging due to the constraints on the edge weights. We show that even with this additional restriction, the structural controllability results for the directed case are applicable to the undirected case. We further address two important issues: the first is optimizing the number of driver nodes that ensure structural controllability of the temporal network; the second is characterizing the maximum reachable subspace when there are constraints on the number of driver nodes. Using the max-flow min-cut theorem, we show that the dimension of the reachable subspace is a submodular function of the set of driver nodes. Hence, we propose greedy algorithms with approximation guarantees to solve these NP-hard problems. The results of two case studies illustrate that the proposed greedy algorithm efficiently computes the optimal driver node set for both directed and undirected temporal networks.
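A generic greedy sketch for the kind of submodular maximization described above; `reach_dim` is a hypothetical stand-in for the paper's max-flow-based computation of the reachable-subspace dimension, and the toy proxy below is only for demonstration.

```python
from typing import Callable, FrozenSet, Iterable, Set

def greedy_driver_nodes(candidates: Iterable[int],
                        reach_dim: Callable[[FrozenSet[int]], int],
                        budget: int) -> Set[int]:
    """Standard greedy maximization of a monotone submodular set function.
    reach_dim(S) should return the dimension of the subspace reachable with
    driver node set S (a stand-in for the paper's computation). Under
    submodularity this greedy rule carries the usual (1 - 1/e) guarantee."""
    selected: Set[int] = set()
    remaining = set(candidates)
    for _ in range(budget):
        base = reach_dim(frozenset(selected))
        best, best_gain = None, 0
        for v in remaining:
            gain = reach_dim(frozenset(selected | {v})) - base
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:               # no node adds reachable dimensions
            break
        selected.add(best)
        remaining.remove(best)
    return selected

# Toy usage: neighborhood coverage as a simple submodular proxy.
neighbors = {0: {0, 1}, 1: {1, 2, 3}, 2: {3, 4}, 3: {4, 5, 6}}
proxy = lambda S: len(set().union(*(neighbors[v] for v in S))) if S else 0
print(greedy_driver_nodes(neighbors.keys(), proxy, budget=2))
```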
- Published
- 2022
66. Closed-loop time-varying continuous-time recursive subspace-based prediction via principle angles rotation
- Author
-
Liangliang Shang, Miao Yu, Jianchang Liu, and Ge Guo
- Subjects
0209 industrial biotechnology ,Applied Mathematics ,020208 electrical & electronic engineering ,Observable ,02 engineering and technology ,Linear subspace ,Fault detection and isolation ,Computer Science Applications ,Linear map ,Moment (mathematics) ,020901 industrial engineering & automation ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Applied mathematics ,Observability ,Electrical and Electronic Engineering ,Instrumentation ,Rotation (mathematics) ,Subspace topology ,Mathematics - Abstract
This paper presents a closed-loop time-varying continuous-time recursive subspace-based prediction method utilizing principle angles rotation. A simple linear mapping is provided by generalized Poisson moment functionals, which deals with the time-derivative problems of the input–output Hankel matrices. The parity space employed in the fault detection field is adopted instead of the observable subspace. The system matrices are estimated consistently by the instrumental variable method and principal component analysis, which solves the problem of biased identification results for systems operating in closed loop with a feedback controller. The system matrices are predicted by the principle angles rotation of the signal subspaces spanned by the extended observability matrices. The effectiveness of the proposed method is illustrated by numerical simulations and real applications.
- Published
- 2022
67. Fuzzy K-Means Clustering With Discriminative Embedding
- Author
-
Feiping Nie, Xiaowei Zhao, Zhihui Li, Xuelong Li, and Rong Wang
- Subjects
Computer science ,business.industry ,Dimensionality reduction ,Pattern recognition ,02 engineering and technology ,Fuzzy logic ,Computer Science Applications ,Weighting ,Computational Theory and Mathematics ,Discriminative model ,Robustness (computer science) ,020204 information systems ,Principal component analysis ,0202 electrical engineering, electronic engineering, information engineering ,Embedding ,Artificial intelligence ,business ,Cluster analysis ,Subspace topology ,Membership function ,Information Systems - Abstract
Fuzzy K-Means (FKM) clustering is of great importance for analyzing unlabeled data. FKM algorithms assign each data point to multiple clusters with some degree of certainty measured by the membership function. In these methods, the fuzzy membership degree matrix is obtained from distances between data points computed in the original space. However, this may lead to suboptimal results because of the influence of noise and redundant features. Besides, some FKM clustering methods ignore the importance of the weighting exponent. In this paper, we propose a novel FKM method called Fuzzy K-Means Clustering With Discriminative Embedding, which conducts dimensionality reduction and fuzzy membership degree learning simultaneously. To retain most of the information in the embedding subspace and improve the robustness of the method, principal component analysis is incorporated into our framework. An iterative optimization algorithm is proposed to solve the model. To validate the efficacy of the proposed method, we perform comprehensive analyses, including convergence behavior, parameter determination and computational complexity. Moreover, we also select an appropriate weighting exponent for each data set. Experimental results on benchmark data sets show that the proposed method is more discriminative and effective for clustering tasks.
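For context, a plain fuzzy K-Means sketch showing the membership-degree and centroid updates the abstract refers to; the paper's joint discriminative embedding and PCA term are not included, and all names are illustrative.

```python
import numpy as np

def fuzzy_kmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy K-Means: alternate membership and centroid updates.
    X: (n, d) data, c: number of clusters, m: weighting exponent (> 1).
    This omits the paper's joint dimensionality reduction; it only shows
    the membership-degree mechanics the abstract refers to."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                 # rows sum to one
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted centroids
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        U = 1.0 / (d2 ** (1.0 / (m - 1)))             # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Toy usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((50, 2)), rng.standard_normal((50, 2)) + 5.0])
U, centers = fuzzy_kmeans(X, c=2)
print(np.round(centers, 2))
```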
- Published
- 2022
68. VoxelHop: Successive Subspace Learning for ALS Disease Classification Using Structural MRI
- Author
-
Xiaofeng Liu, Suma Babu, Fangxu Xing, Georges El Fakhri, C.-C. Jay Kuo, Thomas M Jenkins, Chao Yang, and Jonghye Woo
- Subjects
FOS: Computer and information sciences ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Concatenation ,Computer Science - Computer Vision and Pattern Recognition ,Convolutional neural network ,Article ,Health Information Management ,Robustness (computer science) ,FOS: Electrical engineering, electronic engineering, information engineering ,Humans ,Electrical and Electronic Engineering ,business.industry ,Dimensionality reduction ,Deep learning ,Amyotrophic Lateral Sclerosis ,Image and Video Processing (eess.IV) ,Pattern recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,Magnetic Resonance Imaging ,Backpropagation ,Regression ,Computer Science Applications ,Neural Networks, Computer ,Artificial intelligence ,business ,Subspace topology ,Biotechnology - Abstract
Deep learning has great potential for accurate detection and classification of diseases with medical imaging data, but the performance is often limited by the number of training datasets and memory requirements. In addition, many deep learning models are considered a "black box," which often limits their adoption in clinical applications. To address this, we present a successive subspace learning model, termed VoxelHop, for accurate classification of Amyotrophic Lateral Sclerosis (ALS) using T2-weighted structural MRI data. Compared with popular convolutional neural network (CNN) architectures, VoxelHop has modular and transparent structures with fewer parameters and without any backpropagation, so it is well suited to small dataset sizes and 3D imaging data. VoxelHop has four key components: (1) sequential expansion of near-to-far neighborhoods for multi-channel 3D data; (2) subspace approximation for unsupervised dimension reduction; (3) label-assisted regression for supervised dimension reduction; and (4) concatenation of features and classification between controls and patients. Our experimental results demonstrate that our framework, using a total of 20 controls and 26 patients, achieves an accuracy of 93.48% and an AUC score of 0.9394 in differentiating patients from controls, even with a relatively small number of datasets, showing its robustness and effectiveness. Our thorough evaluations also show its validity and superiority to state-of-the-art 3D CNN classification approaches. Our framework can easily be generalized to other classification tasks using different imaging modalities.
- Published
- 2022
69. A hybrid-line-and-curve search globalization technique for inexact Newton methods
- Author
-
Feng Nan Hwang and Shang-Rong Cai
- Subjects
Numerical Analysis ,Line search ,Applied Mathematics ,Backtracking line search ,Linear subspace ,Computational Mathematics ,symbols.namesake ,Line (geometry) ,Convergence (routing) ,symbols ,Applied mathematics ,Descent direction ,Newton's method ,Subspace topology ,Mathematics - Abstract
The backtracking line search (LS) is one of the most commonly used techniques for enhancing the robustness of Newton-type methods. A Newton method consists of two key steps: search and update. LS tries to find, from the current approximation, the most decreasing updated point along Newton's search direction with an appropriate damping factor. The determination of Newton's search direction relies only on current information. When Newton's search direction is a weak descent direction, the damping factor determined by LS can be unacceptably small, which often happens in the numerical solution of large, sparse systems of equations with strong local nonlinearity. As a result, the solution process falls into a vicious cycle between no update and almost the same search direction, and the intermediate solution is trapped within the same region without any progress. This work proposes a new globalization strategy, namely the hybrid line and curve search (HLCS) technique, for Newton-type methods to resolve their potential failure when line search is used. If the classical line search fails, we activate the curve search phase. In that case, we first decompose the solution space into two orthogonal subspaces based on the predicted value obtained from Newton's search direction, referred to as the “good” and “bad” subspaces; the bad one corresponds to the components causing the violation of the sufficient decrease condition. Next, we project the original predicted value onto the good subspace and then perform a nonlinear elimination process to obtain the corrected solution on the bad subspace. The new update can then hopefully satisfy the sufficient decrease condition and enhance the convergence of inexact Newton methods. As a proof of concept, we present three numerical examples to illustrate the effectiveness of our proposed inexact Newton-HLCS approach.
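A minimal Armijo backtracking line-search sketch on the residual merit function for an inexact Newton step, illustrating the standard LS phase whose failure motivates the HLCS strategy; the curve-search fallback itself is not reproduced, and the test problem is made up.

```python
import numpy as np

def backtracking_line_search(F, x, dx, alpha=1e-4, rho=0.5, max_backtracks=30):
    """Armijo backtracking on f(x) = 0.5*||F(x)||^2 for a Newton step dx.
    Returns the damping factor and the updated point. When dx is a weak
    descent direction the damping factor can shrink to a tiny value,
    which is exactly the failure mode HLCS is designed to handle."""
    f0 = 0.5 * np.dot(F(x), F(x))
    lam = 1.0
    for _ in range(max_backtracks):
        x_new = x + lam * dx
        f_new = 0.5 * np.dot(F(x_new), F(x_new))
        # With an exact Newton direction, grad f . dx = -2*f0, so the
        # Armijo test reads f_new <= (1 - 2*alpha*lam) * f0.
        if f_new <= (1.0 - 2.0 * alpha * lam) * f0:
            return lam, x_new
        lam *= rho                                   # shrink the step
    return lam, x + lam * dx

# Toy usage: one damped Newton step for F(x) = [x0^2 - 2, x1 - 1].
F = lambda x: np.array([x[0] ** 2 - 2.0, x[1] - 1.0])
J = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 1.0]])
x = np.array([3.0, 0.0])
dx = np.linalg.solve(J(x), -F(x))
print(backtracking_line_search(F, x, dx))
```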
- Published
- 2022
70. Robust Subspace Clustering With Low-Rank Structure Constraint
- Author
-
Xuelong Li, Wei Chang, Zhanxuan Hu, and Feiping Nie
- Subjects
Rank (linear algebra) ,Computer science ,Linear subspace ,Spectral clustering ,Computer Science Applications ,Constraint (information theory) ,ComputingMethodologies_PATTERNRECOGNITION ,Computational Theory and Mathematics ,Piecewise ,Representation (mathematics) ,Algorithm ,Integer programming ,Subspace topology ,Information Systems - Abstract
In this paper, a novel low-rank structural model is proposed for segmenting data drawn from a high-dimensional space. Our method is based on the fact that all groups clustered from a high-dimensional dataset are distributed in multiple low-rank subspaces. In general, it is very difficult to find the low-rank structures hidden in data. Different from classical sparse subspace clustering (SSC) and low-rank representation (LRR), which both take two steps, building the affinity matrix and then applying spectral clustering, we introduce a new rank constraint into our model. This constraint allows our model to learn a subspace indicator that captures the different clusters directly from the data without any postprocessing. To further approximate the rank constraint, a piecewise function is utilized as the relaxation term in the proposed model. Besides, under the subspace indicator constraints, the integer programming problem is avoided, which makes our algorithm more efficient and scalable. In addition, we prove the convergence of the proposed algorithm in theory and further discuss the general case in which subspaces do not pass through the origin. Experimental results on both synthetic and real-world datasets demonstrate that our algorithm significantly outperforms state-of-the-art methods.
- Published
- 2022
71. Learning Clustering for Motion Segmentation
- Author
-
Xun Xu, Ce Zhu, Zhuwen Li, Long-Fah Cheong, and Le Zhang
- Subjects
business.industry ,Computer science ,Pattern recognition ,Perceptron ,Linear subspace ,Synthetic data ,Spectral clustering ,ComputingMethodologies_PATTERNRECOGNITION ,Media Technology ,Feature (machine learning) ,Embedding ,Artificial intelligence ,Electrical and Electronic Engineering ,Cluster analysis ,business ,Subspace topology - Abstract
Subspace clustering has been extensively studied from the hypothesis-and-test, algebraic, and spectral clustering-based perspectives. Most assume that only a single type/class of subspace is present. Generalizations to multiple types are non-trivial, plagued by challenges such as choice of types and numbers of models, sampling imbalance and parameter tuning. In many real world problems, data may not lie perfectly on a linear subspace and hand designed linear subspace models may not fit into these situations. In this work, we formulate the multi-type subspace clustering problem as one of learning non-linear subspace filters via deep multi-layer perceptrons (mlps). The response to the learnt subspace filters serve as the feature embedding that is clustering-friendly, i.e., points of the same clusters will be embedded closer together through the network. For inference, we apply K-means to the network output to cluster the data. Experiments are carried out on synthetic data and real world motion segmentation problems, producing state-of-the-art results.
- Published
- 2022
72. Investigation of the radar parameter subspace for different beam-park simulations with the TIRA system
- Author
-
Stijn Lemmens, Jens Rosebrock, Sven Kevin Flegel, Matteo Budoni, Jan Siminski, Itawel Oumrou Maouloud, Claudio Carloni, Delphine Cerutti-Maori, and Publica
- Subjects
Signal processing ,Estimation theory ,Computer science ,Matched filter ,Aerospace Engineering ,Parameter space ,TIRA ,law.invention ,Signal-to-noise ratio ,law ,Radar ,Safety, Risk, Reliability and Quality ,Algorithm ,Subspace topology - Abstract
Radar Beam-Park Experiments (BPEs) are regularly conducted in order to gain insight into the statistical distribution of the space debris environment, especially at low altitudes. Signal processing techniques are necessary to process the data collected during BPEs, and an improvement in these techniques may result in a corresponding improvement in radar detection performance and space object parameter estimation. To this end, a transition from incoherent to coherent pulse integration could be of help, thanks to the resulting improvement in signal-to-noise ratio (SNR). Before assessing the potential of new signal processing schemes, the expected variations of the radar observables over the acquisition time have to be investigated. Indeed, deriving a priori bounds on the possible values the observables can assume would allow adapting the BPE settings (e.g., the radar waveform or the boundaries of the matched filter) in order to optimize the processing speed and the overall detection performance. To carry out this investigation, two BPEs were simulated and the resulting beam crossing lists were exploited to understand how the satellite parameter space maps into the radar parameter subspace. This paper presents and discusses the results of the two simulations.
- Published
- 2022
73. Pose error compensation based on joint space division for 6-DOF robot manipulators
- Author
-
Yingjie Guo, Qunlin Cheng, Weidong Zhu, Haijin Wang, Siming Cao, and Yinglin Ke
- Subjects
Computer science ,business.industry ,Orientation (computer vision) ,General Engineering ,Point set registration ,Linear subspace ,Transformation (function) ,Position (vector) ,Laser tracker ,Inverse distance weighting ,Computer vision ,Artificial intelligence ,business ,Subspace topology - Abstract
This paper introduces a pose error compensation method performed in joint space to improve both the position and orientation accuracy of industrial robots. Joint space division is proposed to achieve dimension reduction and reduce the workload of pose error sampling. The six-dimensional joint space is divided into two three-dimensional subspaces at the wrist center. Each subspace is discretized by a sequence of grid elements whose vertices are taken as sample points. Based on point set registration, two spatial databases of pose errors are built in the two subspaces using a laser tracker. For a given robot posture, the pose errors generated in the two subspaces can be separately estimated using spatial interpolation based on an inverse distance weighting algorithm. Formed by the estimated values, two transformation error matrices are introduced to compensate for the pose errors of the two subspaces. A Comau NJ370–3.0 manipulator was employed as an experimental platform to evaluate the effectiveness of the proposed method. Experimental results demonstrated that the maximum position error was reduced by 96.06% to below 0.334 mm, and the maximum orientation error decreased to below 0.027° after compensation.
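A small sketch of the inverse-distance-weighting interpolation step, assuming the joint-space grid has already been sampled with a laser tracker; the grid vertices and error values below are hypothetical.

```python
import numpy as np

def idw_interpolate(query, sample_points, sample_errors, power=2, eps=1e-9):
    """Inverse distance weighting: estimate the pose error at `query`
    (a joint-space point) from errors measured at nearby grid vertices.
    sample_points: (n, d) grid vertices; sample_errors: (n, k) measured
    error vectors (e.g., position errors from a laser tracker)."""
    d = np.linalg.norm(sample_points - query, axis=1)
    if np.any(d < eps):                       # query coincides with a vertex
        return sample_errors[np.argmin(d)]
    w = 1.0 / d ** power
    return (w[:, None] * sample_errors).sum(axis=0) / w.sum()

# Toy usage: four vertices of a grid cell in a 3-D joint subspace,
# each with a hypothetical 3-D position-error vector in millimetres.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
errors = np.array([[0.1, 0.0, 0.2], [0.2, 0.1, 0.1],
                   [0.0, 0.2, 0.3], [0.3, 0.1, 0.2]])
print(idw_interpolate(np.array([0.4, 0.6, 0.0]), vertices, errors))
```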
- Published
- 2022
74. Locality-Constrained Discriminative Matrix Regression for Robust Face Identification
- Author
-
Xianzhong Zhou, Huaxiong Li, Yuhua Qian, Chao Zhang, and Chunlin Chen
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Locality ,Matrix norm ,Regression analysis ,Pattern recognition ,Computer Science Applications ,Matrix (mathematics) ,Discriminative model ,Artificial Intelligence ,Robustness (computer science) ,Artificial intelligence ,Linear combination ,business ,Software ,Subspace topology - Abstract
Regression-based methods have been widely applied to face identification; they attempt to represent a query sample approximately as a linear combination of all training samples. Recently, a matrix regression model based on the nuclear norm was proposed and has shown strong robustness to structural noise. However, it may ignore two important issues: the label information and the local relationships of the data. In this article, a novel robust representation method called locality-constrained discriminative matrix regression (LDMR) is proposed, which takes label information and locality structure into account. Instead of focusing on the representation coefficients, LDMR directly imposes constraints on the representation components by fully considering the label information, which has a closer connection to the identification process. The locality structure characterized by subspace distances is used to learn class weights, and the correct class is forced to make a larger contribution to the representation. Furthermore, the class weights are also incorporated into a competitive constraint on the representation components, which reduces the pairwise correlations between different classes and enhances the competitive relationships among all classes. An iterative optimization algorithm is presented to solve LDMR. Experiments on several benchmark data sets demonstrate that LDMR outperforms some state-of-the-art regression-based methods.
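As background, a sketch of the nuclear norm and its proximal operator (singular value thresholding), the standard shrinkage step inside solvers for nuclear-norm-regularized matrix regression models of the kind LDMR builds on; this is not the paper's full algorithm, and the example matrix is arbitrary.

```python
import numpy as np

def nuclear_norm(M):
    """Nuclear norm: sum of the singular values of M."""
    return float(np.linalg.svd(M, compute_uv=False).sum())

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.
    Shrinks every singular value by tau (clipping at zero), the usual
    building block in iterative nuclear-norm-regularized solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(4)
E = rng.standard_normal((8, 6))               # e.g., a representation residual
print(nuclear_norm(E), nuclear_norm(svt(E, tau=1.0)))
```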
- Published
- 2022
75. Joint Adaptive Dual Graph and Feature Selection for Domain Adaptation
- Author
-
Wei Wang, Mingshi Yan, Zhengming Ding, Fuming Sun, Jing Sun, Haojie Li, and Zhi-Hui Wang
- Subjects
Contextual image classification ,business.industry ,Computer science ,Feature vector ,Feature selection ,Pattern recognition ,Domain (software engineering) ,Non-negative matrix factorization ,Dual graph ,Media Technology ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Subspace topology ,Curse of dimensionality - Abstract
Domain adaptation aims to exploit domain-invariant features by aligning the cross-domain distributions in a manifold subspace, so that a classifier trained on the source domain can be applied to the target domain. However, two limitations may still degrade performance: (1) the influence of noisy or irrelevant features in the original feature space is ignored, which may unexpectedly hurt the classification of target samples; and (2) a graph constructed directly in the original data space cannot accurately capture the inherent local manifold structure of high-dimensional data due to the curse of dimensionality, which may seriously mislead the learning of transferable features. In this paper, we propose a novel approach to address these problems, referred to as joint Adaptive Dual Graph and Feature Selection for domain adaptation (ADGFS). Specifically, feature selection characterizes the relative importance of different features through a scaling factor, which enables ADGFS not only to reduce the impact of noisy or irrelevant features on knowledge transfer but also to learn informative domain-invariant features. Meanwhile, ADGFS adaptively optimizes the dual graph by learning the similarity matrices of both instance-level and feature-level graphs in the projected low-dimensional manifold subspace rather than the original high-dimensional space, so that the intrinsic local manifold structure of the data can be captured precisely. Moreover, ADGFS simultaneously aligns the marginal and conditional probability distributions in the nonnegative matrix factorization framework to narrow the distribution discrepancies between the two domains, which adequately transfers knowledge from the source domain to the target domain. Comprehensive experiments on four benchmark datasets demonstrate the effectiveness of the proposed approach for cross-domain image classification.
- Published
- 2022
76. BrePartition: Optimized High-Dimensional kNN Search With Bregman Distances
- Author
-
Yu Gu, Ge Yu, Yang Song, and Rui Zhang
- Subjects
Scheme (programming language) ,Structure (mathematical logic) ,Signal processing ,Computer science ,High dimensional ,Linear subspace ,Computer Science Applications ,ComputingMethodologies_PATTERNRECOGNITION ,Computational Theory and Mathematics ,Search algorithm ,computer ,Algorithm ,Subspace topology ,Information Systems ,Curse of dimensionality ,computer.programming_language - Abstract
Bregman distances are widely used in machine learning, speech recognition and signal processing, and kNN search with Bregman distances has become increasingly important with the rapid advances of multimedia applications. Data in multimedia applications such as images and videos are commonly transformed into spaces of hundreds of dimensions. Such high-dimensional spaces pose significant challenges for existing kNN search algorithms with Bregman distances, which can only handle data of medium dimensionality (typically less than 100). This paper addresses the urgent problem of high-dimensional kNN search with Bregman distances. We propose a novel partition-filter-refinement framework. Specifically, we propose an optimized dimensionality partitioning scheme that resolves several non-trivial issues. First, an effective bound from each partitioned subspace is derived to obtain exact kNN results. Second, we conduct an in-depth analysis of the optimal number of partitions and devise an effective partitioning strategy. Third, we design an efficient integrated index structure over all the subspaces to accelerate query processing. Moreover, we extend our exact solution to an approximate version by trading accuracy for efficiency. Experimental results on four real-world datasets and a synthetic dataset show the clear advantage of our method over state-of-the-art algorithms.
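A brute-force sketch of kNN search under a Bregman distance, with the generalized KL divergence as the example; the paper's contribution, the partition-filter-refinement index that avoids this full scan, is not reproduced, and the random feature vectors are stand-ins.

```python
import numpy as np

def kl_bregman(p, q, eps=1e-12):
    """Bregman divergence generated by negative entropy, i.e. the
    generalized KL divergence D(p || q) for nonnegative vectors."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q) - p + q))

def knn_bregman(query, data, k):
    """Brute-force kNN under a Bregman distance; an index such as the
    paper's partition-filter-refinement scheme would prune this scan."""
    d = np.array([kl_bregman(query, x) for x in data])
    idx = np.argsort(d)[:k]
    return idx, d[idx]

rng = np.random.default_rng(1)
data = rng.random((1000, 128))      # e.g., high-dimensional image features
query = rng.random(128)
print(knn_bregman(query, data, k=5))
```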
- Published
- 2022
77. Some iterative approaches for Sylvester tensor equations, Part II: A tensor format of Simpler variant of GCRO-based methods
- Author
-
Farid Saberi-Movahed, Lakhdar Elbouyahyaoui, Mohammed Heyouni, and Azita Tajaddini
- Subjects
Computational Mathematics ,Numerical Analysis ,Approximation error ,Applied Mathematics ,Applied mathematics ,Acceleration (differential geometry) ,Krylov subspace ,Tensor ,Residual ,Generalized minimal residual method ,Orthogonalization ,Subspace topology ,Mathematics - Abstract
In the second part of this two-part work, another acceleration method based on the tensor format is established for solving Sylvester tensor equations. This acceleration approach, called SGCRO-BTF, is based on the inner-outer iteration idea used in the generalized conjugate residual with inner orthogonalization (GCRO) method. In SGCRO-BTF, the Simpler GMRES method based on the tensor format (SGMRES-BTF) is applied in the inner iteration, and the generalized conjugate residual method based on the tensor format (GCR-BTF) is used in the outer iteration. Furthermore, SGCRO-BTF seeks an approximate solution over a tensor subspace spanned by the approximation error tensors produced during the previous outer iterations of SGCRO-BTF and a tensor Krylov subspace constructed by the inner iteration. In order to reduce the computational storage in the outer iteration, a truncated version of SGCRO-BTF is presented, in which only some of the most recent approximation error tensors are kept and then added to the tensor Krylov subspace to obtain a new search subspace. Finally, the proposed methods are tested on a set of experiments and compared with some conventional and state-of-the-art Krylov subspace methods based on the tensor format. Experimental results indicate that the truncated version of SGCRO-BTF is particularly effective for solving Sylvester tensor equations.
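For orientation, a matrix/vector-form sketch of the generalized conjugate residual (GCR) recurrence that SGCRO-BTF generalizes to the tensor format; the inner Simpler GMRES solver and all tensor machinery are omitted, and the small test system is arbitrary.

```python
import numpy as np

def gcr(A, b, x0=None, tol=1e-10, max_iter=200):
    """Generalized Conjugate Residual in ordinary matrix/vector form.
    The paper works with a tensor-format analogue (GCR-BTF) nested with a
    Simpler GMRES inner solver; this only shows the GCR recurrence."""
    x = np.zeros(b.size) if x0 is None else x0.copy()
    r = b - A @ x
    ps, Aps = [], []
    for _ in range(max_iter):
        p, Ap = r.copy(), A @ r
        # A-orthogonalize the new direction against previous directions.
        for pj, Apj in zip(ps, Aps):
            beta = (Ap @ Apj) / (Apj @ Apj)
            p, Ap = p - beta * pj, Ap - beta * Apj
        alpha = (r @ Ap) / (Ap @ Ap)
        x, r = x + alpha * p, r - alpha * Ap
        ps.append(p)
        Aps.append(Ap)
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(gcr(A, b), np.linalg.solve(A, b))   # the two should agree
```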
- Published
- 2022
78. Pareto Optimal Weighting Factor Design of Predictive Current Controller of a Six-Phase Induction Machine Based on Particle Swarm Optimization Algorithm
- Author
-
Mateja Novak, Jose Rodriguez, Tomislav Dragicevic, Jorge Rodas, Jesus Doval-Gandoy, Hector Fretes, Nicolas Gomez, and Victor A. Gomez
- Subjects
weighting factor (WF) ,Computer science ,Energy Engineering and Power Technology ,Context (language use) ,Guidelines ,Tuning ,Control theory ,Model predictive control (MPC) ,Power electronics ,Convergence (routing) ,Weighting factor design ,Model predictive control ,stator currents control ,Electrical and Electronic Engineering ,particle swarm optimization (PSO) ,particle swarm optimization ,Artificial neural network ,Stators ,Particle swarm optimization ,Weighting ,Cost function ,Current control ,multiphase induction machine ,Algorithm ,Subspace topology ,pareto optimal - Abstract
Finite-set model predictive control (FS-MPC) in the form of predictive current control (PCC) is considered an exciting option for the stator current control of multiphase machines due to its control flexibility and easy inclusion of constraints. The weighting factors (WFs) of PCC must be tuned for the variables of interest, such as the machine-loss-related x-y currents, and this tuning is typically performed by a trial-and-error procedure. Tuning methods based on artificial neural networks (ANNs) or the coefficient of variation have been proposed for three-phase inverter and motor drive applications. However, the extension of this concept to multiphase machine applications is not straightforward, and only empirical procedures have been reported. In this context, this article proposes an optimal method to tune the WFs of PCC based on the multiobjective particle swarm optimization (MOPSO) algorithm. A Pareto dominance concept is used in the MOPSO to find the optimal WF values for PCC, comparing the root-mean-square errors (RMSEs) of the stator tracking currents. The proposed method offers a systematic approach to WF selection, with an algorithm that is easy to implement and gives direct control over the size of the search space and the speed of convergence. Simulation and experimental results in steady-state and transient conditions are provided to validate the proposed offline tuning procedure for the PCC of a six-phase induction machine. The improvement in RMSE can be more than 500% for the x-y subspace, with minor effect on the α-β subspace. Finally, the proposed method is extended to a more complex cost function, and the results are compared with an ANN approach.
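A minimal single-objective PSO sketch for weighting-factor tuning; the cost function below is a hypothetical stand-in for a drive simulation returning stator-current RMSE, and the paper's multiobjective Pareto-based variant is not reproduced.

```python
import numpy as np

def pso_tune(cost, dim, bounds, n_particles=20, n_iter=50,
             w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective PSO for tuning weighting factors.
    cost(wf) should run the drive simulation with weighting factors wf
    and return a scalar error (e.g., stator-current RMSE); here a stand-in.
    The paper's MOPSO keeps a Pareto archive instead of one global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Stand-in cost: pretend the best weighting-factor pair is (0.3, 0.7).
cost = lambda wf: (wf[0] - 0.3) ** 2 + (wf[1] - 0.7) ** 2
print(pso_tune(cost, dim=2, bounds=(0.0, 1.0)))
```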
- Published
- 2022
79. Physical Model-Inspired Deep Unrolling Network for Solving Nonlinear Inverse Scattering Problems
- Author
-
Huilin Zhou, Yuhao Wang, Jian Liu, Qiegen Liu, and Tao Ouyang
- Subjects
Nonlinear system ,Artificial neural network ,Augmented Lagrangian method ,business.industry ,Iterative method ,Deep learning ,Inverse scattering problem ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Regularization (mathematics) ,Algorithm ,Subspace topology - Abstract
In this paper, to bridge the gap between traditional model-based methods and data-driven deep learning schemes, we propose a physical model-inspired deep unrolling network for solving nonlinear inverse scattering problems, termed PM-Net. The proposed end-to-end network is formed in two consecutive steps. First, an augmented Lagrangian method is introduced to transform a constrained objective function into an unconstrained optimization, which is further decomposed into four quasi-linear subproblems. Second, we unfold the iterative scheme into a layer-wise deep neural network, mapping each subproblem to a module of the deep unrolling network. In PM-Net, variables including the weights, the regularization of the contrast and other parameters are learned and updated alternately by the corresponding network layers. PM-Net effectively combines neural networks with knowledge of the underlying physics as well as traditional techniques. Unlike existing networks, PM-Net explicitly exploits contrast source and contrast modules. Compared to traditional iterative methods, the performance of PM-Net is comparable to or even better than the subspace-based optimization method under high noise levels. Compared to state-of-the-art learning approaches, not only are fewer network parameters learned, but better performance is also achieved by PM-Net.
- Published
- 2022
80. Transfer Collaborative Fuzzy Clustering in Distributed Peer-to-Peer Networks
- Author
-
Jin Zhou, C. L. Philip Chen, Bozhan Dang, Yingxu Wang, Long Chen, Lin Wang, Rongrong Wang, Shiyuan Han, Yuehui Chen, and Tong Zhang
- Subjects
Iterative and incremental development ,Fuzzy clustering ,Computer science ,Applied Mathematics ,02 engineering and technology ,Peer-to-peer ,computer.software_genre ,Regularization (mathematics) ,Computational Theory and Mathematics ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,Transfer of learning ,Cluster analysis ,computer ,Subspace topology ,Sparse matrix - Abstract
Traditional collaborative fuzzy clustering can effectively perform data clustering in distributed peer-to-peer (P2P) networks, a task that centralized clustering methods cannot complete due to privacy and security requirements or network transmission constraints. However, it increases the number of clustering iterations and lowers clustering efficiency, and the collaborative mechanism hidden in the iterative clustering process is not well revealed or explained. In this paper, a novel series of transfer collaborative fuzzy clustering algorithms is proposed to address these issues. In the first, basic algorithm, transfer learning among neighbor nodes expresses the collaborative mechanism and enhances information collaboration to accelerate the convergence of fuzzy clustering; meanwhile, neighbor nodes can learn knowledge from each other to further promote their respective clustering performance. Then, an improved version with an adjustable learning-rate strategy instead of fixed values is designed to highlight the different influences between neighbor nodes, and appropriate learning rates between neighbor nodes are obtained to ensure stable clustering accuracy. Finally, two extended versions with an attribute-weight-entropy regularization technique are presented for the clustering of high-dimensional sparse data and the extraction of important subspace features. Experiments show the efficiency of the proposed algorithms compared with related prototype-based clustering methods.
- Published
- 2022
81. Millidegree-Level Direction-of-Arrival Estimation and Tracking for Terahertz Ultra-Massive MIMO Systems
- Author
-
Chong Han, Yuhang Chen, Meixia Tao, and Longfei Yan
- Subjects
Beamforming ,business.industry ,Terahertz radiation ,Computer science ,Applied Mathematics ,Direction of arrival ,Convolutional neural network ,Computer Science Applications ,Electronic engineering ,Wireless ,Overhead (computing) ,Electrical and Electronic Engineering ,business ,Subspace topology ,Communication channel - Abstract
Terahertz (0.1-10 THz) wireless communications are expected to meet 100+ Gbps data rates for 6G communications. Able to combat the distance limitation with reduced hardware complexity, ultra-massive multiple-input multiple-output (UM-MIMO) systems with hybrid dynamic array-of-subarrays (DAoSA) beamforming are a promising technology for THz wireless communications. However, fundamental challenges in THz DAoSA systems include millidegree-level three-dimensional direction-of-arrival (DoA) estimation and millisecond-level beam tracking with reduced pilot overhead. To address these challenges, an off-grid subspace-based DAoSA-MUSIC method and a deep convolutional neural network (DCNN) method are proposed for DoA estimation. Furthermore, by exploiting the temporal correlations of the channel variation, an augmented DAoSA-MUSIC-T solution and a convolutional long short-term memory (ConvLSTM) solution are developed to realize DoA tracking. Extensive simulations and comparisons of the proposed subspace- and deep-learning-based algorithms are conducted. Results show that both DAoSA-MUSIC and DCNN achieve super-resolution DoA estimation and outperform existing solutions, while DCNN performs better than DAoSA-MUSIC at high signal-to-noise ratios. Moreover, DAoSA-MUSIC-T and ConvLSTM can capture fleeting DoA variation with an accuracy of 0.1° within milliseconds and reduce pilot overhead by 50%. Compared to DAoSA-MUSIC-T, ConvLSTM can tolerate larger angle variation and remains robust over a long duration.
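For context, the snippet below implements standard narrowband MUSIC for a uniform linear array, showing the subspace steps (sample covariance, eigendecomposition, noise-subspace pseudo-spectrum) that DAoSA-MUSIC builds on; the array geometry and signal model are simplified assumptions, not the paper's three-dimensional millidegree-level setup.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """Standard narrowband MUSIC for a uniform linear array (ULA).

    X          : (n_antennas, n_snapshots) complex array snapshots
    n_sources  : assumed number of impinging signals
    angles_deg : grid of candidate directions in degrees
    d          : element spacing in wavelengths
    """
    n_ant = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                     # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = eigvec[:, : n_ant - n_sources]                 # noise subspace
    k = np.arange(n_ant)[:, None]
    A = np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(angles_deg)))  # steering vectors
    # Pseudo-spectrum: large where a steering vector is orthogonal to the noise subspace
    return 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_ant, n_snap, true_doas = 16, 200, np.array([-20.0, 25.0])
    k = np.arange(n_ant)[:, None]
    A = np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad(true_doas)))
    S = (rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
    N = 0.05 * (rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap)))
    X = A @ S + N
    grid = np.arange(-90, 90, 0.1)
    p = music_spectrum(X, 2, grid)
    peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1   # local maxima
    top = peaks[np.argsort(p[peaks])[-2:]]
    print("Estimated DoAs:", np.sort(grid[top]))
```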
- Published
- 2022
82. 2-D DOA Estimation of Incoherently Distributed Sources Considering Gain-Phase Perturbations in Massive MIMO Systems
- Author
-
He Xu, Shuai Liu, Ye Tian, Wei Liu, and Zhiyan Dong
- Subjects
Estimation ,Computer science ,Applied Mathematics ,MIMO ,Phase (waves) ,Elevation ,Computer Science Applications ,Azimuth ,Base station ,Electrical and Electronic Engineering ,Algorithm ,Subspace topology ,Computer Science::Information Theory ,Mimo systems - Abstract
In massive multiple-input multiple-output (MIMO) systems, accurate direction-of-arrival (DOA) estimation is important for the base station (BS) to perform effective downlink beamforming. So far, there have been few reports on DOA estimation that consider gain-phase perturbations in massive MIMO systems. However, gain-phase perturbations do exist in practical applications and cannot be ignored. In this paper, an efficient method for two-dimensional (2-D) DOA estimation of incoherently distributed (ID) sources that accounts for array gain-phase perturbations is proposed for massive MIMO systems. First, a shift-invariance structure is established in the subspace framework, and a constrained optimization problem is formulated to estimate the nominal azimuth and elevation DOAs as well as the gain-phase perturbations with closed-form expressions, under the assumption that some of the BS antennas are well calibrated; second, the corresponding angular spreads are obtained with the aid of the estimated gain-phase perturbations. Theoretical analysis and an approximate Cramér-Rao bound are also provided. Improved estimation performance is achieved by the proposed method, as demonstrated by numerical simulations.
- Published
- 2022
83. Approach for Topography-Dependent Clutter Suppression in a Spaceborne Surveillance Radar System Based on Adaptive Broadening Processing
- Author
-
Xingzhao Liu, Guisheng Liao, Jiangyuan Chen, Yongyan Sun, Junli Chen, Guozhong Chen, and Penghui Huang
- Subjects
Mathematics::Commutative Algebra ,Computer science ,Terrain ,Geotechnical Engineering and Engineering Geology ,Interference (wave propagation) ,law.invention ,Computer Science::Robotics ,Computer Science::Systems and Control ,law ,Computer Science::Computer Vision and Pattern Recognition ,Trajectory ,Clutter ,Electrical and Electronic Engineering ,Radar ,Secondary surveillance radar ,Algorithm ,Subspace topology ,Eigenvalues and eigenvectors - Abstract
In this letter, a novel method is proposed to suppress terrain-fluctuation clutter based on adaptive broadening processing. In the proposed algorithm, the flat interference phase is first compensated according to a priori radar system parameters. Then, according to the space-time trajectory distribution of the clutter edge, the level of clutter Doppler spread caused by the crab effect is estimated by a cost function related to the clutter eigenvector matrix. Finally, after calculating the clutter suppression weight vector according to the broadened clutter subspace, the non-stationary ground clutter can be robustly rejected. The validity of the proposed method is verified by both simulated and real-measured multichannel radar data.
- Published
- 2022
84. High-Resolution SAR Image Classification Using Subspace Wavelet Encoding Network
- Author
-
Pengfei Liu, Peng Wang, and Kang Ni
- Subjects
Wavelet ,Contextual image classification ,Computer science ,business.industry ,Encoding (memory) ,High resolution ,Pattern recognition ,Artificial intelligence ,Electrical and Electronic Engineering ,Geotechnical Engineering and Engineering Geology ,business ,Subspace topology
- Published
- 2022
85. Hyperspectral Image Stripe Detection and Correction Using Gabor Filters and Subspace Representation
- Author
-
Zhicheng Wang, Bing Zhang, Lianru Gao, Lina Zhuang, Michael K. Ng, and Yashinov Aziz
- Subjects
Computer science ,business.industry ,Noise reduction ,Feature extraction ,Inpainting ,Hyperspectral imaging ,Pattern recognition ,Geotechnical Engineering and Engineering Geology ,Matrix decomposition ,Gabor filter ,Data_FILES ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Subspace topology ,Sparse matrix - Abstract
Hyperspectral images (HSIs) commonly exhibit directional stripes due to failures in pushbroom acquisition. These stripes are not only vertically or horizontally oriented but may also be oblique. Furthermore, they can be aperiodic and heavy. To address this problem, we propose a hyperspectral destriping algorithm, namely GF-destriping. Taking advantage of the high sparsity and strong directionality of stripes in HSIs, Gabor filters are first used to detect the stripes band by band, and then an advanced inpainting method, FastHyIn, is used to recover the striped image. Numerical experiments on simulated and real data sets show that the proposed algorithm is efficient and superior to state-of-the-art HSI destriping algorithms.
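The sketch below is a much-simplified, single-band illustration of the detect-then-inpaint idea, using scikit-image's Gabor filter and biharmonic inpainting in place of the band-by-band detection and FastHyIn recovery used by GF-destriping; the stripe orientation, Gabor frequency, and flagged-pixel fraction are illustrative assumptions.

```python
import numpy as np
from skimage.filters import gabor
from skimage.restoration import inpaint_biharmonic

def destripe_band(band, theta=0.0, frequency=0.125, stripe_frac=0.15):
    """Detect stripes in one band with a Gabor filter roughly oriented across
    the stripes, then recover the flagged pixels by biharmonic inpainting."""
    real, imag = gabor(band, frequency=frequency, theta=theta)
    response = np.hypot(real, imag)                             # stripe-sensitive magnitude
    mask = response > np.quantile(response, 1.0 - stripe_frac)  # flag the strongest responses
    return inpaint_biharmonic(band, mask), mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(0.5, 0.05, (64, 64))
    striped = clean.copy()
    striped[:, ::8] += 0.4                                      # synthetic vertical stripes
    restored, mask = destripe_band(striped)
    print("flagged pixels:", int(mask.sum()),
          " MAE after restoration:", float(np.abs(restored - clean).mean()))
```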
- Published
- 2022
86. A Novel Convolutional Autoencoder-Based Clutter Removal Method for Buried Threat Detection in Ground-Penetrating Radar
- Author
-
Eyyup Temlioglu and Isin Erer
- Subjects
Computer science ,business.industry ,Pattern recognition ,Sparse approximation ,Autoencoder ,law.invention ,Convolution ,Non-negative matrix factorization ,law ,Ground-penetrating radar ,General Earth and Planetary Sciences ,Clutter ,Artificial intelligence ,Electrical and Electronic Engineering ,Radar ,business ,Subspace topology - Abstract
The clutter encountered in ground-penetrating radar (GPR) systems seriously affects the performance of subsurface target detection methods. A new clutter removal method based on convolutional autoencoders (CAEs) is introduced. The raw GPR image is encoded via successive convolution and pooling layers and then decoded to provide the clutter-free GPR image. The loss function is defined between the reference clutter-free target image and the decoder output, and it is minimized to learn the weight coefficients from the raw data. The method is compared to conventional subspace methods, the recently proposed nonnegative matrix factorization, low-rank and sparse decomposition (LRSD) methods, and dictionary-separation-based morphological component analysis. The CAE and its deeper version, the deep CAE (DCAE), are trained on several scenarios generated by the electromagnetic simulation tool gprMax. Simulation results demonstrate the effectiveness of the proposed method in challenging scenarios. For real GPR images, the networks trained on simulated data remain slightly behind the LRSD methods in the dry case; nonetheless, they outperform the aforementioned processing techniques in the more challenging wet case.
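A compact sketch of a convolutional autoencoder trained to map raw (cluttered) B-scans to clutter-free targets, in the spirit of the description above; the layer sizes, synthetic data, and training loop are illustrative and do not reproduce the authors' CAE/DCAE configuration.

```python
import torch
import torch.nn as nn

class ClutterRemovalCAE(nn.Module):
    """Encoder (conv + pooling) / decoder (transposed conv) autoencoder that
    maps a raw GPR B-scan to an estimate of the clutter-free target image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    # Toy training loop: raw = clutter-free target + synthetic horizontal clutter bands
    torch.manual_seed(0)
    target = torch.zeros(8, 1, 64, 64)
    target[:, :, 28:36, 20:44] = 1.0                          # buried-object signature
    bands = torch.sin(torch.linspace(0, 12, 64)).view(1, 1, 64, 1)
    raw = target + 0.5 * bands + 0.05 * torch.randn_like(target)

    model = ClutterRemovalCAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(200):
        loss = nn.functional.mse_loss(model(raw), target)     # loss w.r.t. clutter-free reference
        opt.zero_grad(); loss.backward(); opt.step()
    print("final training loss:", float(loss))
```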
- Published
- 2022
87. Semisupervised Classification With Novel Graph Construction for High-Dimensional Data
- Author
-
Hau-San Wong, Lianglun Cheng, Fengxu Ye, C. L. Philip Chen, Zhiwen Yu, Wenming Cao, Kaixiang Yang, and Jane You
- Subjects
Clustering high-dimensional data ,Computer Networks and Communications ,business.industry ,Computer science ,Similarity matrix ,Pattern recognition ,Graph ,Computer Science Applications ,Data set ,Artificial Intelligence ,Leverage (statistics) ,Graph (abstract data type) ,Artificial intelligence ,business ,Classifier (UML) ,Software ,Subspace topology - Abstract
Graph-based methods have achieved impressive performance on semisupervised classification (SSC). Traditional graph-based methods have two main drawbacks. First, the graph is predefined before training a classifier, which does not leverage the interactions between classifier training and similarity-matrix learning. Second, when handling high-dimensional data with noisy or redundant features, a graph constructed in the original input space is often unsuitable and may lead to poor performance. In this article, we propose an SSC method with novel graph construction (SSC-NGC), in which the similarity matrix is optimized in both the label space and an additional subspace to obtain a better and more robust result than in the original data space. Furthermore, to obtain a high-quality subspace, we learn the projection matrix of the additional subspace by preserving the local and global structure of the data. Finally, we integrate the classifier training, the graph construction, and the subspace learning into a unified framework. Within this framework, the classifier parameters, similarity matrix, and subspace projection matrix are adaptively learned in an iterative scheme to obtain an optimal joint result. We conduct extensive comparative experiments against state-of-the-art methods on multiple real-world data sets. Experimental results demonstrate the superiority of the proposed method over other state-of-the-art algorithms.
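For contrast with the joint framework described above, the snippet below shows the classical predefined-graph baseline (a Gaussian-affinity graph followed by closed-form label propagation) whose limitations motivate SSC-NGC; the affinity bandwidth and propagation weight are illustrative choices.

```python
import numpy as np

def label_propagation(X, y, alpha=0.9, sigma=1.0):
    """Semisupervised label propagation on a predefined Gaussian-affinity graph.
    y uses -1 for unlabeled samples and class indices >= 0 for labeled ones."""
    dist2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-dist2 / (2 * sigma ** 2))                       # predefined similarity matrix
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))                             # symmetric normalization
    classes = np.unique(y[y >= 0])
    Y = np.zeros((len(y), len(classes)))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    F = np.linalg.solve(np.eye(len(y)) - alpha * S, Y)          # closed-form propagation
    return classes[F.argmax(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
    y = -np.ones(80, dtype=int)
    y[0], y[40] = 0, 1                                          # one labeled sample per class
    pred = label_propagation(X, y)
    print("accuracy:", (pred == np.repeat([0, 1], 40)).mean())
```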
- Published
- 2022
88. Localization and size estimation for breaks in nuclear power plants
- Author
-
Ching Chen, Shun-Chi Wu, Te-Chuan Wang, Yuh-Ming Ferng, and Ting-Han Lin
- Subjects
Break size estimation ,business.industry ,Computer science ,Noise (signal processing) ,Deep learning ,TK9001-9401 ,Regression analysis ,law.invention ,Nuclear Energy and Engineering ,law ,Position (vector) ,Nuclear power plant ,Nuclear engineering. Atomic power ,Artificial intelligence ,Isolation (database systems) ,Break localization ,business ,Algorithm ,Subspace topology ,Event (probability theory) ,Multiple signal classification (MUSIC) - Abstract
Several algorithms for nuclear power plant (NPP) break event detection, isolation, localization, and size estimation are proposed. A break event can be promptly detected and isolated after its occurrence by simultaneously monitoring changes in the sensing readings and employing an interquartile-range-based isolation scheme. By modeling the multi-sensor data block of a break as rank-one, the break can be located, using the Multiple Signal Classification (MUSIC) algorithm, as the position whose lead-field vector is most orthogonal to the noise subspace of that data block. Owing to the flexibility of deep neural networks in selecting the best regression model for the available data, the break size can be estimated from multi-sensor recordings of the break regardless of the sensor types. The efficacy of the proposed algorithms was evaluated using data generated by the Maanshan NPP simulator. The experimental results demonstrated that the MUSIC method could distinguish two nearby breaks. However, if the two breaks were close together and small, the MUSIC method might locate them incorrectly. The break sizes estimated by the proposed deep learning model were close to their actual values, but relative errors of more than 8% were observed when estimating the sizes of small breaks.
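A minimal sketch of an interquartile-range rule for flagging abrupt changes in multi-sensor readings, in the spirit of the detection/isolation step described above; the window length, the IQR factor, and the synthetic data are conventional illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def iqr_change_detector(readings, window=20, k=3.0):
    """Flag time steps at which any sensor's step-to-step change exceeds
    Q3 + k*IQR of that sensor's recent changes.

    readings : (n_steps, n_sensors) array of sensor time series
    returns  : boolean array (n_steps,), True where a break event is flagged
    """
    diffs = np.abs(np.diff(readings, axis=0))                   # step-to-step changes
    flags = np.zeros(readings.shape[0], dtype=bool)
    for t in range(window, diffs.shape[0]):
        q1, q3 = np.percentile(diffs[t - window:t], [25, 75], axis=0)
        flags[t + 1] = np.any(diffs[t] > q3 + k * (q3 - q1))    # per-sensor IQR rule
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 0.01, (300, 5))                         # steady readings with sensor noise
    x[200:, 2] += 1.0                                           # sudden sustained jump in sensor 2
    print("flagged steps:", np.flatnonzero(iqr_change_detector(x)))
```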
- Published
- 2022
89. Fusion of Hyperspectral and Multispectral Images Accounting for Localized Inter-Image Changes
- Author
-
Sen Jia, Qingquan Li, Jun Zhou, Meng Xu, and Xiyou Fu
- Subjects
Fusion ,business.industry ,Computer science ,Multispectral image ,Hyperspectral imaging ,Pattern recognition ,Residual ,Matrix decomposition ,Data set ,General Earth and Planetary Sciences ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Image resolution ,Subspace topology - Abstract
The high spectral resolution of hyperspectral images (HSIs) generally comes at the expense of low spatial resolution, which hinders the application of HSIs. Fusing an HSI and a multispectral image (MSI) from different sensors to obtain an image with high spatial and spectral resolution is an economical and effective approach, but localized spatial and spectral changes between images acquired at different time instants can negatively affect the fusion results, which has rarely been considered in fusion methods. In this paper, we propose a novel group-sparsity-constrained fusion method, based on matrix factorization, to fuse hyperspectral and multispectral images. Specifically, we impose an l2,1 norm on the residual term of the MSI to account for the localized inter-image changes occurring between the acquisitions of the hyperspectral and multispectral images. Further, by exploiting the plug-and-play framework, we plug in a state-of-the-art denoiser, namely BM3D, as the prior on the subspace coefficients. We refer to the proposed approach as the group sparsity constrained fusion method (GSFus). We performed fusion experiments on two kinds of datasets, i.e., with and without obvious localized changes between the HSIs and MSIs, as well as on a full-resolution data set. Extensive experiments in comparison with seven state-of-the-art fusion methods suggest that the proposed method is more effective at fusing hyperspectral and multispectral images than the competing methods.
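The row-wise shrinkage (proximal operator) of the l2,1 norm, shown below, is the standard building block by which such a group-sparse residual term is handled inside matrix-factorization fusion solvers; it is a generic component, not the full GSFus algorithm.

```python
import numpy as np

def prox_l21(R, tau):
    """Proximal operator of tau*||R||_{2,1}: shrink each row of R toward zero
    in Euclidean norm; rows with norm below tau are set exactly to zero."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R = rng.normal(0, 0.1, (6, 4))
    R[2] += 2.0                                   # one row carrying a localized change
    S = prox_l21(R, tau=0.5)
    print("row norms before:", np.round(np.linalg.norm(R, axis=1), 2))
    print("row norms after :", np.round(np.linalg.norm(S, axis=1), 2))
```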
- Published
- 2022
90. A Subspace Projection Approach for Clutter Mitigation in Holographic Subsurface Imaging
- Author
-
Tao Liu, Zhihua He, Chen Cheng, Xiaoji Song, and Yi Su
- Subjects
Cross-correlation ,Computer science ,business.industry ,Pattern recognition ,Geotechnical Engineering and Engineering Geology ,Signal ,Linear subspace ,law.invention ,law ,Singular value decomposition ,Clutter ,Artificial intelligence ,Electrical and Electronic Engineering ,Radar ,Projection (set theory) ,business ,Subspace topology - Abstract
Holographic subsurface radar (HSR) is recognized as an effective remote sensing modality for detecting shallowly buried objects with a high-resolution image in plan view. However, subsurface detection with HSR is prone to impairment by clutter contamination, which often obscures the target response. In this letter, a novel clutter mitigation method combining singular value decomposition (SVD) and response cross-correlation analysis is presented. The proposed method first applies SVD to decompose the radar data matrix into a number of singular components. The signal cross-correlation characteristics are then analyzed to show that the variance of the left singular vectors is directly proportional to the target proportion in the radar data. Target and clutter subspaces can then be identified by maximizing the defined weighted target-to-clutter ratio (WTCR). Results of numerical simulations and laboratory experiments corroborate the effectiveness of the proposed method in reducing clutter while preserving the target image.
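A generic singular-component filtering sketch: decompose the data matrix by SVD and discard the components attributed to clutter. Here the dominant component is treated as clutter, which is a common baseline, whereas the paper selects the clutter subspace by maximizing its weighted target-to-clutter ratio (WTCR).

```python
import numpy as np

def svd_clutter_removal(D, clutter_components=1):
    """Remove clutter by zeroing the leading singular components of the data
    matrix D and reconstructing from the remaining (target) components."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    s_target = s.copy()
    s_target[:clutter_components] = 0.0            # discard the assumed clutter subspace
    return (U * s_target) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 64
    clutter = np.outer(np.ones(n), np.sin(np.linspace(0, 6, n)))    # strong rank-1 background
    target = np.zeros((n, n)); target[30:34, 30:34] = 0.5           # weak localized target
    D = clutter + target + 0.01 * rng.standard_normal((n, n))
    D_clean = svd_clutter_removal(D, clutter_components=1)
    print("target-cell value before/after:", round(D[31, 31], 2), round(D_clean[31, 31], 2))
```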
- Published
- 2022
91. Diagonalized Low-Rank Learning for Hyperspectral Image Classification
- Author
-
Changda Xing, Yiliu Liu, Meiling Wang, Zhisheng Wang, and Chaowei Duan
- Subjects
Pixel ,Computer science ,business.industry ,Feature extraction ,Hyperspectral imaging ,Pattern recognition ,Support vector machine ,ComputingMethodologies_PATTERNRECOGNITION ,Discriminative model ,Feature (computer vision) ,Classifier (linguistics) ,General Earth and Planetary Sciences ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Subspace topology - Abstract
Hyperspectral image (HSI) classification is a current research hotspot. Most existing methods produce discriminative features with poorly structured distributions and low information utilization, which may degrade classification performance. To remedy these deficiencies, we propose a diagonalized low-rank learning (DLRL) model for HSI classification. Specifically, a classwise regularization is used to capture the classwise block-diagonal structure of the low-rank representation, which clusters the represented HSI pixels of each class into the same subspace and extracts features with a well-ordered distribution. Such regularization makes it easier to classify HSIs correctly. In addition, we combine sparsity and collaboration to extract more discriminative features and guarantee high information utilization, i.e., a tradeoff between sparsity and collaboration is sought to capture both the correlations among HSI pixels and the characteristics of each pixel. In this way, the rich information in the HSI can be fully exploited for good feature extraction. Further, the estimated feature representation is used as input to a support vector machine (SVM) classifier for HSI classification. Extensive experiments validate that the proposed DLRL method achieves much better classification performance than several state-of-the-art algorithms.
- Published
- 2022
92. Local Discriminant Subspace Learning for Gas Sensor Drift Problem
- Author
-
Shifeng Guo, Zhengkun Yi, Xinyu Wu, Wanfeng Shang, and Tiantian Xu
- Subjects
Computer science ,Linear discriminant analysis ,Computer Science Applications ,Compensation (engineering) ,Human-Computer Interaction ,ComputingMethodologies_PATTERNRECOGNITION ,Discriminant ,Control and Systems Engineering ,Electrical and Electronic Engineering ,Projection (set theory) ,Algorithm ,Software ,Subspace topology ,Eigendecomposition of a matrix - Abstract
Sensor drift is one of the severe issues that gas sensors suffer from. To alleviate the sensor drift problem, a gas sensor drift compensation approach is proposed based on local discriminant subspace projection (LDSP). The proposed approach aims to find a subspace that reduces the distribution difference between two domains, i.e., the source and target domains. Similar to domain regularized component analysis (DRCA), a recently proposed sensor drift correction method, the mean distribution discrepancy is minimized in the common subspace in our approach. LDSP extends DRCA in two aspects: it not only takes the label information of the source data into account, reducing the chance that samples with different class labels stay close to each other in the subspace, but also borrows the idea of locality-preserving projection to handle multimodal data. Specifically, inspired by local Fisher discriminant analysis (LFDA), the label information is utilized to maximize the local between-class variance of the source data in the latent common subspace while simultaneously minimizing the local within-class variance. The formulation of LDSP is a generalized eigenvalue problem that can be readily solved. Experimental results show that the proposed method outperforms other gas sensor drift compensation methods in terms of classification accuracy on two public gas sensor drift datasets.
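For orientation, the snippet below solves the kind of generalized eigenvalue problem such discriminant subspace methods reduce to (maximize between-class scatter while minimizing within-class scatter); it is plain Fisher-style discriminant analysis via SciPy and omits LDSP's domain-discrepancy and locality terms.

```python
import numpy as np
from scipy.linalg import eigh

def discriminant_subspace(X, y, dim=2, reg=1e-3):
    """Solve max_w (w^T Sb w)/(w^T Sw w) as the generalized eigenproblem
    Sb w = lambda * Sw w and return the top-`dim` projection directions."""
    mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                    # within-class scatter
    Sw += reg * np.eye(Sw.shape[0])                      # regularize for numerical stability
    vals, vecs = eigh(Sb, Sw)                            # generalized eigendecomposition
    return vecs[:, np.argsort(vals)[::-1][:dim]]         # leading directions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 0.5, (50, 4)) for m in (0, 2, 4)])
    y = np.repeat([0, 1, 2], 50)
    W = discriminant_subspace(X, y, dim=2)
    print("projected data shape:", (X @ W).shape)
```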
- Published
- 2022
93. A Continuous Teleoperation Subspace With Empirical and Algorithmic Mapping Algorithms for Nonanthropomorphic Hands
- Author
-
Maximilian Haas-Heger, Cassie Meeker, and Matei Ciocarlie
- Subjects
FOS: Computer and information sciences ,Computer science ,business.industry ,Robot manipulator ,Robot hand ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Kinematics ,Computer Science - Robotics ,Control and Systems Engineering ,Mapping algorithm ,Teleoperation ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Robotics (cs.RO) ,Subspace topology - Abstract
Teleoperation is a valuable tool for robotic manipulators in highly unstructured environments. However, finding an intuitive mapping between a human hand and a non-anthropomorphic robot hand can be difficult, due to the hands' dissimilar kinematics. In this paper, we seek to create a mapping between the human hand and a fully actuated, non-anthropomorphic robot hand that is intuitive enough to enable effective real-time teleoperation, even for novice users. To accomplish this, we propose a low-dimensional teleoperation subspace which can be used as an intermediary for mapping between hand pose spaces. We present two different methods to define the teleoperation subspace: an empirical definition, which requires a person to define hand motions in an intuitive, hand-specific way, and an algorithmic definition, which is kinematically independent, and uses objects to define the subspace. We use each of these definitions to create a teleoperation mapping for different hands. One of the main contributions of this paper is the validation of both the empirical and algorithmic mappings with teleoperation experiments controlled by ten novices and performed on two kinematically distinct hands. The experiments show that the proposed subspace is relevant to teleoperation, intuitive enough to enable control by novices, and can generalize to non-anthropomorphic hands with different kinematics.
- Published
- 2022
94. First-Order Sea Clutter Suppression for High-Frequency Surface Wave Radar Using Orthogonal Projection in Spatial–Temporal Domain
- Author
-
Chen Zhao, Zezong Chen, Fan Ding, and Jian Li
- Subjects
Covariance matrix ,Acoustics ,Orthographic projection ,Geotechnical Engineering and Engineering Geology ,law.invention ,symbols.namesake ,law ,Surface wave ,symbols ,Clutter ,Electrical and Electronic Engineering ,Radar ,Doppler effect ,Geology ,Eigendecomposition of a matrix ,Subspace topology - Abstract
The broadened first-order sea clutter, caused by signals arriving from different directions with various radial current velocities, creates severe disturbance for target detection using high-frequency surface wave radar (HFSWR). Conventional sea clutter suppression methods tend to remove both the sea clutter and the target signals when they are mixed in the Doppler spectrum. Based on the characteristics of the target signal and sea clutter in the spatial-temporal domain, a new first-order sea clutter suppression method for HFSWR using orthogonal projection is proposed. The proposed method uses data from multiple channels and the slow-time domain at the adjacent range cell to construct a covariance matrix, from which the sea clutter subspace is obtained by eigendecomposition. The original signals are then projected onto the sea clutter subspace, and the component lying in that subspace is subtracted from the original signals, suppressing the sea clutter while retaining the target signals. Simulation and experimental results for single-target and multiple-target cases indicate that the proposed method can suppress the first-order sea clutter effectively, enhancing the target detection capability in the sea clutter zone for HFSWR.
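A generic sketch of the projection-and-subtraction step: estimate a clutter subspace from reference snapshots via covariance eigendecomposition, project the cell under test onto that subspace, and subtract the projection; the dimensions, subspace rank, and synthetic data are illustrative assumptions.

```python
import numpy as np

def suppress_clutter(x, reference, rank):
    """Subtract the component of x lying in the clutter subspace spanned by the
    `rank` dominant eigenvectors of the reference covariance matrix.

    x         : (m,) spatial-temporal snapshot of the cell under test
    reference : (m, n_ref) snapshots from an adjacent range cell (clutter only)
    """
    R = reference @ reference.conj().T / reference.shape[1]     # sample covariance
    eigval, eigvec = np.linalg.eigh(R)
    Uc = eigvec[:, np.argsort(eigval)[::-1][:rank]]             # clutter subspace basis
    return x - Uc @ (Uc.conj().T @ x)                           # remove clutter-subspace component

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n_ref = 32, 200
    modes = rng.standard_normal((m, 3)) + 1j * rng.standard_normal((m, 3))
    reference = modes @ (rng.standard_normal((3, n_ref)) + 1j * rng.standard_normal((3, n_ref)))
    target = np.exp(2j * np.pi * 0.23 * np.arange(m))           # target-like steering signal
    x = modes @ (rng.standard_normal(3) + 1j * rng.standard_normal(3)) + 0.3 * target
    y = suppress_clutter(x, reference, rank=3)
    print("power before:", round(np.vdot(x, x).real, 1), " after:", round(np.vdot(y, y).real, 1))
```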
- Published
- 2022
95. Spatial-Aware Collaboration–Competition Preserving Graph Embedding for Hyperspectral Image Classification
- Author
-
Chiranjibi Shah and Qian Du
- Subjects
Computer science ,business.industry ,Graph embedding ,Dimensionality reduction ,Hyperspectral imaging ,Pattern recognition ,Geotechnical Engineering and Engineering Geology ,Linear discriminant analysis ,Tikhonov regularization ,Graph (abstract data type) ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Representation (mathematics) ,Subspace topology - Abstract
Recently, graph-based discriminant analysis has drawn much attention for representing a high-dimensional hyperspectral data set in a low-dimensional subspace by defining the high-dimensional data structure on a graph. Obtaining optimal representation coefficients for classification purposes is the key in such methods. A closed-form solution can be found for the collaborative representation problem using labeled samples, which offers computational efficiency. An unsupervised approach, collaboration preserving graph embedding (CPGE), exists for dimensionality reduction (DR), and its performance is further enhanced by imposing a locality-preserving constraint in the method called collaboration-competition preserving graph embedding (CCPGE). In this letter, we introduce spatial-aware collaboration-competitive preserving graph embedding with Tikhonov (SaCCPGT) by imposing a spatial regularization term on the objective function of CCPGE with Tikhonov regularization. In this way, spectral and spatial information can be utilized in a closed-form solution in the proposed method. Experimental results on different hyperspectral data sets demonstrate the superior performance of the proposed SaCCPGT in comparison with state-of-the-art graph-based discriminant analysis approaches for DR.
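The closed-form, Tikhonov-regularized collaborative representation that such graph-embedding methods build on is sketched below: each sample is represented over the remaining samples with an l2 penalty, and the coefficients populate a graph matrix; the regularization weight is illustrative, and the spatial term of SaCCPGT is omitted.

```python
import numpy as np

def collaborative_coefficients(X, lam=0.1):
    """For each column x_i of X, solve min_w ||x_i - X_{-i} w||^2 + lam*||w||^2
    in closed form and store the coefficients as column i of the graph matrix W."""
    d, n = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)            # leave sample i out of its own dictionary
        D = X[:, idx]
        w = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
        W[idx, i] = w
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 30))                   # 30 samples with 20 spectral features
    W = collaborative_coefficients(X, lam=0.1)
    A = 0.5 * (np.abs(W) + np.abs(W).T)             # symmetrized affinity for graph embedding
    print("affinity matrix shape:", A.shape)
```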
- Published
- 2022
96. Vision-Based Localization in Multi-Agent Networks With Communication Constraints
- Author
-
Yuan Shen, Kai Gu, and Fengzhuo Zhang
- Subjects
Flexibility (engineering) ,Computer Networks and Communications ,Property (programming) ,Computer science ,Aerospace Engineering ,Euclidean distance matrix ,Convexity ,symbols.namesake ,Automotive Engineering ,symbols ,Electrical and Electronic Engineering ,Projection (set theory) ,Fisher information ,Algorithm ,Subspace topology ,Data transmission - Abstract
Among the diverse applications of the Internet of Vehicles (IoV), vision-based localization has attracted increasing attention for its extraordinary performance in terms of accuracy and flexibility. The ultra-reliability and low-latency requirements of IoV have created an urgent demand for the provision of precise relative network geometry and efficient data transmission. In this paper, we propose a vision-based relative localization scheme in the presence of communication constraints. First, we derive the Fisher information matrix and obtain the relative squared position error bound (SPEB) via subspace projection. Next, we determine the local convexity of the relative SPEB with respect to the bit allocation vectors and propose two bit allocation algorithms. Furthermore, we exploit the vision-based relative geometry and develop two localization algorithms based on Euclidean distance matrix completion and the properties of the vision-based relative geometry. The simulation results validate that the proposed vision-based localization algorithms achieve near-optimal performance in terms of the relative SPEB within the bandwidth-constrained localization network, and they also demonstrate the superiority of cooperation among agents.
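As a sketch of how relative geometry can be recovered from pairwise distances, the snippet below applies classical multidimensional scaling to a complete Euclidean distance matrix; the completion step for missing entries used in the paper is omitted, and coordinates are recovered only up to a rigid transformation.

```python
import numpy as np

def coords_from_edm(D, dim=2):
    """Classical MDS: recover relative coordinates from a Euclidean distance
    matrix D (entries are distances), up to rotation and translation."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                    # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(G)
    idx = np.argsort(vals)[::-1][:dim]             # leading eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.uniform(0, 10, (6, 2))                 # true agent positions
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    Q = coords_from_edm(D)
    # Verify that all pairwise distances are reproduced by the recovered geometry
    D_hat = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=2)
    print("max distance error:", float(np.max(np.abs(D - D_hat))))
```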
- Published
- 2022
97. A Postmatched-Filtering Image-Domain Subspace Method for Channel Mismatch Estimation of Multiple Azimuth Channels SAR
- Author
-
Zijing Zhang, Jixiang Xiang, Yong Wang, Mengdao Xing, Jun Yang, Zheng Bao, Min Bao, and Guang-Cai Sun
- Subjects
Synthetic aperture radar ,Image domain ,Azimuth ,Computer science ,Signal reconstruction ,General Earth and Planetary Sciences ,Filter (signal processing) ,Electrical and Electronic Engineering ,Algorithm ,Subspace topology ,Computer Science::Information Theory ,Communication channel - Abstract
Multiple azimuth channels (MACs) synthetic aperture radar (SAR) can theoretically achieve high azimuth resolution and wide swath (HRWS). In practice, however, channel mismatch leads to ghosts or azimuth ambiguities, which degrade the imaging quality. This article proposes a novel approach for estimating the channel mismatch of MACs SAR in the image domain. First, we found that the degrees of freedom (DOF) of MACs signals double after signal reconstruction and imaging. As a result, when the number of channels is not large enough, the subspace method for error estimation cannot be applied. To deal with this problem, we introduce a DOF compression method based on spectral filtering, which decreases the image-domain DOF. Finally, an image-domain subspace method is proposed to estimate the channel phase error, using the focused data and selecting high-SNR regions of the SAR images. The proposed method offers advantages for channel phase error estimation. Simulated spaceborne MACs SAR data and real measured airborne SAR data are processed to demonstrate the effectiveness of the proposed method.
- Published
- 2022
98. A Second Order Algorithm for MCP Regularized Optimization
- Author
-
Wanyou Cheng, Ziteng Guo, and Hanlin Zhou
- Subjects
Line search ,General Computer Science ,Computer science ,General Engineering ,Regularization (mathematics) ,Stationary point ,Set (abstract data type) ,Convergence (routing) ,General Materials Science ,Point (geometry) ,Linear independence ,Electrical and Electronic Engineering ,Algorithm ,Subspace topology - Abstract
In this paper, we establish two optimality properties of MCP regularized optimization. One shows that the support set of a local minimizer corresponds to linearly independent columns of A; the other provides two sufficient conditions for a stationary point to be a local minimizer. An active-set subspace second-order algorithm for MCP regularized optimization is proposed. The active sets are estimated by an identification technique that can accurately identify the zero components in a neighbourhood of a stationary point. The search direction consists of two parts: some of the components are defined directly, while the others are determined by a second-order step. A nonmonotone line search strategy that guarantees global convergence is used. Numerical comparisons with several state-of-the-art methods demonstrate the efficiency of the proposed method.
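For reference, the snippet below gives the minimax concave penalty (MCP) and its well-known componentwise thresholding (proximal) operator, which characterize the stationary points such algorithms work with; it is a generic building block, not the authors' active-set second-order method.

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """MCP value: lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam, else gamma*lam^2/2."""
    t = np.abs(t)
    return np.where(t <= gamma * lam, lam * t - t ** 2 / (2 * gamma), 0.5 * gamma * lam ** 2)

def mcp_threshold(z, lam, gamma):
    """Proximal operator of the MCP (unit step size, gamma > 1): firm thresholding
    that shrinks small entries toward zero and leaves large entries unchanged."""
    az = np.abs(z)
    shrunk = np.sign(z) * np.maximum(az - lam, 0.0) / (1.0 - 1.0 / gamma)
    return np.where(az <= gamma * lam, shrunk, z)

if __name__ == "__main__":
    z = np.array([-3.0, -1.2, -0.4, 0.0, 0.4, 1.2, 3.0])
    print("penalty  :", np.round(mcp_penalty(z, lam=1.0, gamma=2.5), 3))
    print("threshold:", np.round(mcp_threshold(z, lam=1.0, gamma=2.5), 3))
```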
- Published
- 2022
99. Graph Convolutional Sparse Subspace Coclustering With Nonnegative Orthogonal Factorization for Large Hyperspectral Images
- Author
-
Jianjun Liu, Liang Xiao, Jocelyn Chanussot, and Nan Huang
- Subjects
Pixel ,business.industry ,Computer science ,Hyperspectral imaging ,Pattern recognition ,Spectral clustering ,Matrix decomposition ,General Earth and Planetary Sciences ,Graph (abstract data type) ,Artificial intelligence ,Electrical and Electronic Engineering ,Representation (mathematics) ,business ,Cluster analysis ,Subspace topology - Abstract
Sparse subspace clustering (SSC) is a representative data clustering paradigm that has been broadly applied to the unsupervised classification of hyperspectral images (HSIs). Existing SSC methods usually first produce a subspace affinity matrix between representations of hyperspectral pixels and then apply spectral clustering to the affinity matrix. This separated framework fails to exploit the dualities contained in both features and pixels, or in higher-order entities, at the same time, making it difficult to compute coclusters simultaneously. In addition, SSC methods often require expensive computation and memory capacity to approximate the spectral decomposition of the affinity matrix, hindering their applicability to large HSIs. To overcome these limitations, we propose a novel graph convolutional sparse subspace coclustering (GCSSC) model with nonnegative orthogonal factorization for large HSIs, in which affinity matrix learning and spectral coclustering are integrated into a unified optimization model to obtain the optimal clustering results. Specifically, to form a more compact self-representation, a superpixel-based adaptive dictionary construction strategy is proposed in place of a global dictionary to represent the pixels precisely. To explore the spatial-contextual and spectral neighboring characteristics among dictionary atoms, graph convolution is applied to the dictionary atoms to aggregate local neighborhood information, and the affinity matrix in the proposed coclustering framework is constructed under a joint sparsity-constrained representation model. To reduce the high computational cost and memory requirements, a nonnegative orthogonal factorization constraint is proposed to offer an alternative spectral clustering for hyperspectral pixels and dictionary atoms simultaneously. The clustering performance of the proposed method is evaluated on three classical HSIs, and the experimental results show that the proposed method is memory- and computationally efficient and outperforms state-of-the-art HSI clustering methods.
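The symmetric-normalized graph-convolution (aggregation) step applied to dictionary atoms is sketched below; the k-nearest-neighbor adjacency used here is an illustrative stand-in for the paper's superpixel-based, spatial-spectral atom graph.

```python
import numpy as np

def graph_convolve(X, A):
    """One graph-convolution aggregation: H = D^{-1/2} (A + I) D^{-1/2} X,
    where A is the adjacency matrix of the atom graph and X the atom features."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

def knn_adjacency(X, k=3):
    """Symmetric k-nearest-neighbor adjacency built from Euclidean distances."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    A = np.zeros_like(dist)
    nn = np.argsort(dist, axis=1)[:, :k]
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, nn.ravel()] = 1.0
    return np.maximum(A, A.T)                       # symmetrize

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    atoms = rng.normal(size=(10, 5))                # 10 dictionary atoms, 5-dim features
    H = graph_convolve(atoms, knn_adjacency(atoms, k=3))
    print("aggregated atom features:", H.shape)
```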
- Published
- 2022
100. BSF: Block Subspace Filter for Removing Narrowband and Wideband Radio Interference Artifacts in Single-Look Complex SAR Images
- Author
-
Kun Li, Huizhang Yang, Jie Li, Yanlei Du, and Jian Yang
- Subjects
Synthetic aperture radar ,Computer science ,business.industry ,Filter (signal processing) ,Electromagnetic interference ,Narrowband ,General Earth and Planetary Sciences ,Preprocessor ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Wideband ,business ,Subspace topology ,Block (data storage) - Abstract
Radio signals emitted by various sources, such as ground radars and broadcast/communication devices, can unintentionally cause radio frequency interference (RFI) to spaceborne synthetic aperture radar (SAR), degrading SAR image quality to various degrees. Most existing methods tackle this problem by applying specially designed preprocessing steps to RFI-polluted level-0 SAR data before SAR focusing. However, such preprocessing is not widely applied in spaceborne SAR, and radiometric artifacts caused by various RFI sources remain in the level-1 single-look complex (SLC) image products of many spaceborne SAR data sets, e.g., the Sentinel-1 open data archives. To address this problem, in this article we first propose a generic subspace model for characterizing a variety of RFI types, which reveals a low-dimensional structure of the RFI subspace. Based on the proposed model, we then design a block subspace filter (BSF) for removing RFI artifacts directly in SLC SAR images. Experiments with ERS-2, ENVISAT/ASAR, Sentinel-1, and Gaofen-3 data are presented, and quantitative assessments based on numerical simulations are provided, demonstrating the promising performance and application potential of the proposed method. BSF is simple yet efficient and does not require preprocessing of level-0 raw data, which helps users obtain clean SAR images. A MATLAB/Octave implementation of BSF is available at https://github.com/huizhangyang/BSF.
- Published
- 2022