19 results on "Multi-view learning"
Search Results
2. Self-adaptive label discovery and multi-view fusion for complementary label learning
- Author
- Tang, Long, Yan, Pengfei, Tian, Yingjie, and Pardalos, Panos M.
- Published
- 2025
- Full Text
- View/download PDF
3. Multi-view learning with enhanced multi-weight vector projection support vector machine
- Author
- Yan, Xin, Wang, Shuaixing, Chen, Huina, and Zhu, Hongmiao
- Published
- 2025
- Full Text
- View/download PDF
4. Information-controlled graph convolutional network for multi-view semi-supervised classification
- Author
- Shi, Yongquan, Pi, Yueyang, Liu, Zhanghui, Zhao, Hong, and Wang, Shiping
- Published
- 2025
- Full Text
- View/download PDF
5. Noise-robust consistency regularization for semi-supervised semantic segmentation
- Author
- Zhang, HaiKuan, Li, Haitao, Zhang, Xiufeng, Yang, Guanyu, Li, Atao, Du, Weisheng, Xue, Shanshan, and Liu, Chi
- Published
- 2025
- Full Text
- View/download PDF
6. Revisiting multi-view learning: A perspective of implicitly heterogeneous Graph Convolutional Network.
- Author
- Zou, Ying, Fang, Zihan, Wu, Zhihao, Zheng, Chenghui, and Wang, Shiping
- Subjects
- PROCESS capability; IMPLICIT learning; MACHINE learning
- Abstract
Graph Convolutional Network (GCN) has become a hotspot in graph-based machine learning due to its powerful graph processing capability. Most existing GCN-based approaches are designed for single-view data, yet in numerous practical scenarios data is expressed through multiple views rather than a single view. The ability of GCN to model homogeneous graphs is indisputable, but it is insufficient for handling the heterophily of multi-view data. In this paper, we revisit multi-view learning and propose an implicit heterogeneous graph convolutional network that efficiently captures the heterogeneity of multi-view data while exploiting the powerful feature aggregation capability of GCN. We automatically assign optimal importance to each view when constructing the meta-path graph. High-order cross-view meta-paths are explored based on the obtained graph, and a series of graph matrices are generated. These graph matrices are combined with a learnable global feature representation to obtain heterogeneous graph embeddings at various levels. Finally, to effectively utilize both local and global information, we introduce a graph-level attention mechanism at the meta-path level that allocates private information to each node individually. Extensive experimental results convincingly support the superior performance of the proposed method compared to other state-of-the-art approaches.
• Explore the heterogeneity of multi-view data by introducing meta-paths.
• Enable construction of meta-paths on multi-view data.
• Design an attention mechanism to learn the importance of meta-path graph embeddings.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
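The meta-path construction and view weighting described in the entry above lend themselves to a small illustration. The following Python sketch is only a loose, assumed reading of the idea, not the authors' implementation: it builds a kNN graph per view, mixes the graphs with softmax view weights, squares the mixed adjacency as a crude stand-in for a two-hop cross-view meta-path graph, and applies one normalized graph-convolution step. All function names, the kNN construction, and the toy data are illustrative assumptions.

```python
# Hedged sketch: cross-view graph mixing + a second-order propagation step.
# Illustrative only; this is NOT the method proposed in the entry above.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def normalized_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def mix_views(views, view_logits, k=10):
    """Build a kNN graph per view and mix them with softmax view weights."""
    weights = np.exp(view_logits) / np.exp(view_logits).sum()
    n = views[0].shape[0]
    mixed = np.zeros((n, n))
    for X_v, w_v in zip(views, weights):
        A_v = kneighbors_graph(X_v, n_neighbors=k, mode="connectivity").toarray()
        A_v = np.maximum(A_v, A_v.T)          # symmetrize
        mixed += w_v * A_v
    return mixed

# Toy data: two views of the same 100 samples.
rng = np.random.default_rng(0)
views = [rng.normal(size=(100, 20)), rng.normal(size=(100, 30))]
X = np.concatenate(views, axis=1)             # shared node features

A1 = normalized_adj(mix_views(views, view_logits=np.array([0.0, 0.5])))
A2 = A1 @ A1                                  # crude stand-in for a 2-hop "meta-path" graph

W = rng.normal(scale=0.1, size=(X.shape[1], 16))
H = np.maximum(A2 @ X @ W, 0)                 # one ReLU graph-convolution step
print(H.shape)                                # (100, 16)
```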
7. Joint learning of feature and topology for multi-view graph convolutional network.
- Author
- Chen, Yuhong, Wu, Zhihao, Chen, Zhaoliang, Dong, Mianxiong, and Wang, Shiping
- Subjects
- MATRIX decomposition; TOPOLOGY; DATA mapping
- Abstract
Graph convolutional networks have been extensively employed in semi-supervised classification tasks. Although some studies have attempted to leverage graph convolutional networks to explore multi-view data, they mostly consider the fusion of feature and topology individually, leading to underutilization of the consistency and complementarity of multi-view data. In this paper, we propose an end-to-end joint fusion framework that aims to simultaneously conduct consistent feature integration and adaptive topology adjustment. Specifically, to capture feature consistency, we construct a deep matrix decomposition module that maps data from different views onto a feature space to obtain a consistent feature representation. Moreover, we design a more flexible graph convolution that adaptively learns a more robust topology. A dynamic topology can greatly reduce the influence of unreliable information, yielding a more adaptive representation. As a result, our method jointly designs an effective feature fusion module and a topology adjustment module, and lets these two modules mutually enhance each other. It takes full advantage of consistency and complementarity to better capture the intrinsic information. The experimental results indicate that our method surpasses state-of-the-art semi-supervised classification methods.
• Propose an end-to-end framework for multi-view semi-supervised classification.
• Design a multi-view auto-encoder to fuse features by approximating the matrix decomposition.
• Explore a more robust topology that fuses the adjacency matrices generated by kNN and kFN.
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
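The "deep matrix decomposition onto a consistent feature space" step in the entry above can be pictured with a shallow stand-in. The sketch below is a hedged simplification rather than the paper's end-to-end model: it learns one shared factor H for all views by alternating least squares on sum_v ||X_v - H W_v||_F^2; the adaptive topology module is omitted and all names and sizes are assumptions.

```python
# Hedged sketch: one shared representation H from several views via
# alternating least squares on  sum_v ||X_v - H @ W_v||_F^2.
import numpy as np

def shared_factorization(views, dim=16, iters=50, ridge=1e-6):
    n = views[0].shape[0]
    rng = np.random.default_rng(0)
    H = rng.normal(size=(n, dim))
    for _ in range(iters):
        # Per-view loading matrices given the shared factor H.
        Ws = [np.linalg.solve(H.T @ H + ridge * np.eye(dim), H.T @ X_v) for X_v in views]
        # Shared factor given all loading matrices.
        lhs = sum(W @ W.T for W in Ws) + ridge * np.eye(dim)
        rhs = sum(X_v @ W.T for X_v, W in zip(views, Ws))
        H = np.linalg.solve(lhs.T, rhs.T).T
    return H

rng = np.random.default_rng(1)
views = [rng.normal(size=(200, 40)), rng.normal(size=(200, 25))]
H = shared_factorization(views)
print(H.shape)  # (200, 16)
```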
8. Safe screening rules for multi-view support vector machines.
- Author
- Wang, Huiru, Zhu, Jiayi, and Zhang, Siyuan
- Subjects
- MACHINE learning; SUPPORT vector machines; COMPUTATIONAL complexity
- Abstract
Multi-view learning aims to exploit the advantages of different views so that they complement each other, fully mining the potential information in the data. However, the complexity of multi-view learning algorithms is much higher than that of single-view algorithms. Based on the optimality conditions of two classical multi-view models, SVM-2K and the multi-view twin support vector machine (MvTwSVM), this paper analyzes the correspondence between dual variables and samples, and derives their safe screening rules for the first time, termed SSR-SVM-2K and SSR-MvTwSVM. These rules can assign or delete four groups of dual variables in advance, before solving the optimization problem, so as to greatly reduce the scale of the optimization problem and improve the solution speed. More importantly, the screening criterion is "safe": the solution of the reduced optimization problem is the same as that of the original problem before screening. In addition, we give a sequential screening rule to speed up the parameter optimization process and analyze its properties, including the similarities and differences between the safe screening rules of multi-view SVMs and single-view SVMs, the computational complexity, and the relationship between the parameter interval and the screening rate. Numerical experiments verify the effectiveness of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
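The "safety" property discussed in the entry above, namely that discarding samples whose dual variables are zero leaves the SVM solution unchanged, can be checked empirically. The snippet below is not the paper's screening rule (which identifies such samples before solving); it merely verifies the underlying property on an ordinary single-view SVM with scikit-learn.

```python
# Hedged illustration: removing non-support vectors (zero dual variables)
# and refitting does not change the SVM decision function.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
full = SVC(kernel="rbf", C=1.0, gamma=0.1).fit(X, y)

# Keep only the support vectors and refit on the reduced problem.
keep = full.support_
reduced = SVC(kernel="rbf", C=1.0, gamma=0.1).fit(X[keep], y[keep])

# The two decision functions agree up to numerical tolerance on any input.
probe = np.random.default_rng(0).normal(size=(50, 10))
print(np.max(np.abs(full.decision_function(probe) - reduced.decision_function(probe))))
```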
9. Fast multiple graphs learning for multi-view clustering.
- Author
- Jiang, Tianyu and Gao, Quanxue
- Subjects
- GRAPH algorithms; CHARTS, diagrams, etc.; ALGORITHMS; SCALABILITY
- Abstract
Graph-based multi-view clustering has become an active topic due to its efficiency in characterizing both the complex structure of and the relationships between multimedia data. However, existing methods have the following shortcomings: (1) they are inefficient or even fail at large-scale graph learning because of the graph construction and eigen-decomposition steps; (2) they cannot fully exploit the complementary information and spatial structure embedded in the graphs of different views. To exploit complementary information and tackle the scalability issue plaguing graph-based multi-view clustering, we propose an efficient multiple-graph learning model based on a small number of anchor points and tensor Schatten p-norm minimization. Specifically, we construct a hidden and tractable large graph from an anchor graph for each view and exploit the complementary information embedded in the anchor graphs of different views through a tensor Schatten p-norm regularizer. Finally, we develop an efficient algorithm, which scales linearly with the data size, to solve the proposed model. Extensive experimental results on several datasets indicate that our proposed method outperforms some state-of-the-art multi-view clustering algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
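The anchor-graph idea in the entry above is easy to sketch: each sample is described by its similarities to a few anchors, so graph construction scales linearly with the number of samples. The code below is an assumed, simplified version using k-means anchors and a Gaussian kernel; the paper's tensor Schatten p-norm fusion is replaced here by a naive average, and all parameters are illustrative.

```python
# Hedged sketch of per-view anchor graphs; fusion is a plain average stand-in.
import numpy as np
from sklearn.cluster import KMeans

def anchor_graph(X, n_anchors=50, k=5, sigma=1.0, seed=0):
    """Return an n x m row-stochastic similarity matrix between samples and anchors."""
    anchors = KMeans(n_clusters=n_anchors, n_init=10, random_state=seed).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # squared distances
    Z = np.exp(-d2 / (2 * sigma ** 2))
    # Keep each sample's k most similar anchors, then normalize rows to sum to 1.
    thresh = np.sort(Z, axis=1)[:, -k][:, None]
    Z = np.where(Z >= thresh, Z, 0.0)
    return Z / Z.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
views = [rng.normal(size=(1000, 20)), rng.normal(size=(1000, 40))]
Zs = [anchor_graph(X_v) for X_v in views]          # one anchor graph per view
S = sum(Zs) / len(Zs)                              # naive fusion stand-in
print(S.shape)                                     # (1000, 50)
```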
10. Attention-based stackable graph convolutional network for multi-view learning.
- Author
- Xu, Zhiyong, Chen, Weibin, Zou, Ying, Fang, Zihan, and Wang, Shiping
- Subjects
- SUPERVISED learning; PROCESS capability; MACHINE learning; CLASSIFICATION
- Abstract
In multi-view learning, graph-based methods such as the Graph Convolutional Network (GCN) are extensively researched due to their effective graph processing capabilities. However, most GCN-based methods require complex preliminary operations such as sparsification, which may bring additional computation costs and training difficulties. Additionally, as the number of stacked layers increases in most GCNs, the over-smoothing problem arises, resulting in ineffective utilization of GCN capabilities. In this paper, we propose an attention-based stackable graph convolutional network that captures consistency across views and combines an attention mechanism with the powerful aggregation capability of GCN to effectively mitigate over-smoothing. Specifically, we introduce node self-attention to establish dynamic connections between nodes and generate view-specific representations. To maintain cross-view consistency, a data-driven approach is devised to assign attention weights to views, forming a common representation. Finally, based on residual connectivity, we apply an attention mechanism to the original projection features to generate layer-specific complementarity, which compensates for the information loss during graph convolution. Comprehensive experimental results demonstrate that the proposed method outperforms other state-of-the-art methods in multi-view semi-supervised tasks.
• Introduces node self-attention for cohesive multi-view representations.
• Features data-driven attention for cross-view consistency.
• Employs residual cross-attention to mitigate over-smoothing issues.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
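The view-level attention fusion described in the entry above can be illustrated with a small PyTorch module. This is a hedged sketch under assumed shapes and names, not the paper's network: per-view node embeddings are scored by a tiny MLP, softmax-normalized into per-node view weights, fused by a weighted sum, and combined with a toy residual connection.

```python
# Hedged sketch of view-level attention fusion; names and sizes are assumptions.
import torch
import torch.nn as nn

class ViewAttentionFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, view_embeddings):           # list of (n, dim) tensors
        H = torch.stack(view_embeddings, dim=1)   # (n, V, dim)
        att = torch.softmax(self.score(H), dim=1) # (n, V, 1) per-node view weights
        return (att * H).sum(dim=1)               # (n, dim) fused representation

n, dim = 64, 32
views = [torch.randn(n, dim) for _ in range(3)]
fused = ViewAttentionFusion(dim)(views)
residual = fused + views[0]                        # toy residual connection
print(residual.shape)                              # torch.Size([64, 32])
```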
11. Multi-view heterogeneous graph learning with compressed hypergraph neural networks.
- Author
- Huang, Aiping, Fang, Zihan, Wu, Zhihao, Tan, Yanchao, Han, Peng, Wang, Shiping, and Zhang, Le
- Subjects
- GRAPH neural networks; HYPERGRAPHS
- Abstract
Multi-view learning is an emerging field of multi-modal fusion, which involves representing a single instance using multiple heterogeneous features to improve compatibility prediction. However, existing graph-based multi-view learning approaches rest on homogeneity assumptions and pairwise relationships, which may not adequately capture the complex interactions among real-world instances. In this paper, we design a compressed hypergraph neural network from the perspective of multi-view heterogeneous graph learning. This approach effectively captures rich multi-view heterogeneous semantic information, incorporating a hypergraph structure that simultaneously enables the exploration of higher-order correlations between samples in multi-view scenarios. Specifically, we introduce efficient hypergraph convolutional networks based on an explainable regularizer-centered optimization framework. Additionally, a low-rank approximation is adopted to recast the initial complex multi-view heterogeneous graph as hypergraphs. Extensive experiments against several advanced node classification and multi-view classification methods demonstrate the feasibility and effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
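For the hypergraph ingredient in the entry above, a single hypergraph convolution step in the standard HGNN style can be written compactly. The sketch below is illustrative only; the compression, low-rank reformulation, and regularizer-centered framework of the paper are not reproduced, and the toy incidence matrix is an assumption.

```python
# Hedged sketch of one standard hypergraph convolution step.
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """X: (n, d) features, H: (n, m) incidence matrix, Theta: (d, d_out) weights."""
    n, m = H.shape
    w = np.ones(m) if edge_w is None else edge_w
    D_e = H.sum(axis=0)                      # hyperedge degrees
    D_v = H @ w                              # vertex degrees (weighted)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(D_v))
    De_inv = np.diag(1.0 / D_e)
    G = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(G @ X @ Theta, 0)      # ReLU activation

rng = np.random.default_rng(0)
n, m, d = 30, 10, 8
H = (rng.random((n, m)) < 0.3).astype(float)
H[H.sum(axis=1) == 0, 0] = 1.0               # ensure every vertex joins some hyperedge
H[0, H.sum(axis=0) == 0] = 1.0               # ensure every hyperedge has some vertex
X = rng.normal(size=(n, d))
out = hypergraph_conv(X, H, Theta=rng.normal(scale=0.3, size=(d, 16)))
print(out.shape)                             # (30, 16)
```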
12. Heterogeneous graph convolutional network for multi-view semi-supervised classification.
- Author
- Wang, Shiping, Huang, Sujia, Wu, Zhihao, Liu, Rui, Chen, Yong, and Zhang, Dell
- Subjects
- SUPERVISED learning; CLASSIFICATION; MATRICES (Mathematics); MOTIVATION (Psychology)
- Abstract
This paper proposes a novel approach to semantic representation learning from multi-view datasets, distinct from most existing methodologies, which typically handle each view individually and maintain a shared semantic link across the multi-view data only via a unified optimization process. Notably, even recent advancements such as Co-GCN continue to treat each view as an independent graph and subsequently aggregate the respective GCN representations into output representations, which ignores the complex semantic interactions among heterogeneous data. To address this issue, we design a unified framework that connects multi-view data with heterogeneous graphs. Specifically, our study envisions multi-view data as a heterogeneous graph composed of shared isomorphic nodes and multi-type edges, wherein the same nodes are shared across different views but each view possesses its own unique edge type. This perspective motivates us to utilize the heterogeneous graph convolutional network (HGCN) to extract semantic representations from multi-view data for semi-supervised classification tasks. To the best of our knowledge, this is an early attempt to transform multi-view data into a heterogeneous graph within the realm of multi-view semi-supervised learning. In our approach, the original input of the HGCN is composed of concatenated multi-view matrices, and its convolutional operator (the graph Laplacian matrix) is adaptively learned from multi-type edges in a data-driven fashion. After rigorous experimentation on eight public datasets, our proposed method, referred to as HGCN-MVSC, demonstrated encouraging superiority over several state-of-the-art competitors on semi-supervised classification tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
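The "shared nodes, one edge type per view" picture in the entry above suggests a simple layer: combine per-view normalized adjacencies with learnable softmax weights and propagate the concatenated multi-view features through the mixture. The PyTorch sketch below is an assumed simplification, not HGCN-MVSC itself; the toy graphs and thresholds are arbitrary.

```python
# Hedged sketch: learnable mixture of per-view adjacencies + one GCN-style layer.
import torch
import torch.nn as nn

class MultiEdgeTypeGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_edge_types):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(n_edge_types))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, X, adjs):                       # adjs: list of (n, n) normalized adjacencies
        w = torch.softmax(self.edge_logits, dim=0)    # data-driven edge-type weights
        A = sum(w_i * A_i for w_i, A_i in zip(w, adjs))
        return torch.relu(A @ self.lin(X))

def sym_norm(A):
    A = A + torch.eye(A.shape[0])
    d = A.sum(dim=1)
    D = torch.diag(d.rsqrt())
    return D @ A @ D

n = 50
views = [torch.randn(n, 16), torch.randn(n, 24)]
adjs = [sym_norm((torch.cdist(V, V) < 5.0).float()) for V in views]   # toy per-view graphs
X = torch.cat(views, dim=1)                           # concatenated multi-view features
layer = MultiEdgeTypeGCNLayer(X.shape[1], 32, n_edge_types=len(adjs))
print(layer(X, adjs).shape)                           # torch.Size([50, 32])
```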
13. Multi-view Teacher–Student Network.
- Author
- Tian, Yingjie, Sun, Shiding, and Tang, Jingjing
- Subjects
- CONVOLUTIONAL neural networks; DISTILLATION
- Abstract
Multi-view learning aims to fully exploit view-consistency and view-discrepancy for performance improvement. Knowledge Distillation (KD), characterized by the so-called "Teacher–Student" (T–S) learning framework, can transfer information learned by one model to another. Inspired by knowledge distillation, we propose a Multi-view Teacher–Student Network (MTS-Net), which combines knowledge distillation and multi-view learning into a unified framework. We first redefine the teacher and student for the multi-view case. The MTS-Net is then built by optimizing both the view classification loss and the knowledge distillation loss in an end-to-end training manner. We further extend MTS-Net to image recognition tasks and present a multi-view Teacher–Student framework with convolutional neural networks, called MTSCNN. To the best of our knowledge, MTS-Net and MTSCNN bring a new insight into extending the Teacher–Student framework to tackle the multi-view learning problem. We theoretically verify the mechanism of MTS-Net and MTSCNN, and comprehensive experiments demonstrate the effectiveness of the proposed methods.
• The MTS-Net framework exploits knowledge distillation to realize both principles.
• We extend MTS-Net to image recognition tasks and present MTSCNN.
• We theoretically analyze why MTS-Net and MTSCNN work.
• Experiments are conducted to demonstrate the effectiveness of MTS-Net and MTSCNN.
[ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
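The knowledge-distillation ingredient behind the entry above is the standard temperature-scaled loss, sketched below in PyTorch. How the teacher and student are defined across views is specific to the paper and not reproduced here; the temperature, weighting, and toy tensors are assumptions.

```python
# Hedged sketch of a standard knowledge-distillation loss (cross-entropy + soft KL).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=3.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # conventional temperature-squared scaling
    return alpha * ce + (1 - alpha) * kd

student = torch.randn(8, 5, requires_grad=True)
teacher = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```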
14. Improved multi-view GEPSVM via Inter-View Difference Maximization and Intra-view Agreement Minimization.
- Author
- Cheng, Yawen, Yin, Hang, Ye, Qiaolin, Huang, Peng, Fu, Liyong, Yang, Zhangjing, and Tian, Yuan
- Subjects
- SUPPORT vector machines; HYPERPLANES; MATHEMATICAL regularization
- Abstract
The Multiview Generalized Eigenvalue Proximal Support Vector Machine (MvGEPSVM) is an effective, recently proposed method for multi-view data classification. However, it ignores discrimination between different views and agreement within the same view, and it offers no robustness guarantee. In this paper, we propose an improved multi-view GEPSVM (IMvGEPSVM), which adds a multi-view regularization that connects different views of the same class and simultaneously maximizes the separation between samples from different classes across heterogeneous views to promote discrimination. This makes the classification more effective. In addition, the L1-norm rather than the squared L2-norm is employed to compute the distances from the sample points to the hyperplane, so as to reduce the effect of outliers in the proposed model. To solve the resulting objective, an efficient iterative algorithm is presented, and we prove its convergence. Experimental results show the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
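For context on the entry above, the plain single-view GEPSVM building block reduces to a regularized generalized eigenvalue problem; the improved multi-view, L1-norm model of the paper is not reproduced. The sketch below, under an assumed Tikhonov regularization and toy data, recovers one class hyperplane from the smallest generalized eigenvector.

```python
# Hedged sketch of the single-view GEPSVM generalized eigenvalue step.
import numpy as np
from scipy.linalg import eigh

def gepsvm_plane(A, B, delta=1e-3):
    """Hyperplane [w; b] close to class-A points and far from class-B points."""
    A1 = np.hstack([A, np.ones((A.shape[0], 1))])
    B1 = np.hstack([B, np.ones((B.shape[0], 1))])
    G = A1.T @ A1 + delta * np.eye(A1.shape[1])   # regularized numerator matrix
    H = B1.T @ B1 + delta * np.eye(B1.shape[1])
    vals, vecs = eigh(G, H)                        # generalized eigenproblem G z = lambda H z
    z = vecs[:, 0]                                 # eigenvector of the smallest eigenvalue
    return z[:-1], z[-1]                           # w, b

rng = np.random.default_rng(0)
A = rng.normal(loc=+2.0, size=(60, 2))
B = rng.normal(loc=-2.0, size=(60, 2))
w, b = gepsvm_plane(A, B)
print("class-A plane:", w, b)
```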
15. Discriminative margin-sensitive autoencoder for collective multi-view disease analysis.
- Author
- Zhang, Zheng, Zhu, Qi, Xie, Guo-Sen, Chen, Yi, Li, Zhengming, and Wang, Shuihua
- Subjects
- SUPPORT vector machines; PROTEIN structure; PROTEIN folding; ALZHEIMER'S disease; INFORMATION commons
- Abstract
Medical predictions are typically made collectively, based on bioimages collected from different sources or on various clinical characterizations described by multiple physiological features. Learning intrinsic structure from multiple heterogeneous features is therefore significant but challenging for multi-view disease understanding. Different from existing methods that deal with each single view separately, this paper proposes a discriminative Margin-Sensitive Autoencoder (MSAE) framework for automated Alzheimer's disease (AD) diagnosis and accurate protein fold recognition. Generally, our MSAE aims to collaboratively explore the complementary properties of multi-view bioimage features in a semantic-sensitive encoder–decoder paradigm, where the discriminative semantic space is explicitly constructed in a margin-scalable regression model. Specifically, we develop a semantic-sensitive autoencoder, where an encoder projects multi-view visual features into a common semantic-aware latent space, and a decoder is exerted as an additional constraint to reconstruct the respective visual features. In particular, the importance of different views is adaptively weighted by a self-adjusting learning scheme, such that their underlying correlations and complementary characteristics across multiple views are simultaneously preserved in the latent common representations. Moreover, a flexible semantic space is formulated by a margin-scalable support vector machine to improve the discriminability of the learning model. Importantly, a correntropy-induced metric is exploited as a robust regularization measurement to better control outliers for effective classification. A half-quadratic minimization and alternating learning strategy is devised to optimize the resulting framework, such that each subproblem has a closed-form solution in each iterative minimization phase. Extensive experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets show that our MSAE achieves superior performance for both binary and multi-class classification in AD diagnosis, and evaluations on protein folds demonstrate that our method achieves very encouraging performance on protein structure recognition, outperforming state-of-the-art methods.
• We address multi-view disease analysis with a discriminative margin-sensitive autoencoder framework, where adaptive-weighting multi-view common representation learning and discriminative margin-sensitive semantic space construction are collaboratively considered in one unified learning model.
• We propose a robust multi-view semantic autoencoder model to integrate diverse views.
• Different views are adaptively assigned self-learned optimal weights.
• The correlations across multiple views are concurrently preserved in the common space.
• We construct a discriminative margin-sensitive space to calibrate regressive targets.
• Extensive experiments demonstrate the superiority of our method.
[ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
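The encoder-decoder backbone described in the entry above can be pictured with a minimal multi-view autoencoder: per-view encoders into one shared latent code, per-view decoders for reconstruction, and a linear prediction head. The PyTorch sketch below omits the margin-scalable SVM and correntropy terms; all sizes and names are illustrative assumptions.

```python
# Hedged sketch: shared-latent multi-view autoencoder with a classification head.
import torch
import torch.nn as nn

class MultiViewAutoencoder(nn.Module):
    def __init__(self, view_dims, latent=32, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList([nn.Linear(d, latent) for d in view_dims])
        self.decoders = nn.ModuleList([nn.Linear(latent, d) for d in view_dims])
        self.head = nn.Linear(latent, n_classes)

    def forward(self, views):
        # Average the per-view codes into one shared latent representation.
        z = sum(torch.relu(enc(x)) for enc, x in zip(self.encoders, views)) / len(views)
        recons = [dec(z) for dec in self.decoders]
        return z, recons, self.head(z)

views = [torch.randn(16, 100), torch.randn(16, 50)]
labels = torch.randint(0, 2, (16,))
model = MultiViewAutoencoder([100, 50])
z, recons, logits = model(views)
loss = nn.functional.cross_entropy(logits, labels) + sum(
    nn.functional.mse_loss(r, x) for r, x in zip(recons, views))
print(float(loss))
```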
16. Partition level multiview subspace clustering.
- Author
- Kang, Zhao, Zhao, Xinjia, Peng, Chong, Zhu, Hongyuan, Zhou, Joey Tianyi, Peng, Xi, Chen, Wenyu, and Xu, Zenglin
- Subjects
- SPACE; ABILITY; NOISE; REPRODUCTION
- Abstract
Multiview clustering has gained increasing attention recently due to its ability to deal with data from multiple sources (views) and to explore the complementary information between different views. Among various methods, multiview subspace clustering methods provide encouraging performance. They mainly integrate the multiview information in the space where the data points lie; hence, their performance may deteriorate because of noise in individual views or inconsistency between heterogeneous features. For multiview clustering, the basic premise is that there exists a shared partition among all views, so the natural space for multiview clustering is the space of all partitions. Orthogonal to existing methods, we propose to fuse multiview information at the partition level, following two intuitive assumptions: (i) each partition is a perturbation of the consensus clustering; (ii) a partition that is close to the consensus clustering should be assigned a large weight. Finally, we propose a unified multiview subspace clustering model that incorporates the graph learning from each view, the generation of basic partitions, and the fusion of the consensus partition. These three components are seamlessly integrated and can iteratively boost each other towards an overall optimal solution. Experiments on four benchmark datasets demonstrate the efficacy of our approach against state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
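Partition-level fusion, the central idea of the entry above, can be approximated in a few lines: cluster each view, turn each partition into a co-association matrix, weight each view inversely to its distance from the consensus, and re-cluster the consensus. The sketch below is a loose stand-in, not the paper's unified model; the weight update rule and toy data are assumptions.

```python
# Hedged sketch: weighted consensus of per-view partitions, then spectral re-clustering.
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

def coassociation(labels):
    """1 where two samples share a cluster, 0 otherwise."""
    return (labels[:, None] == labels[None, :]).astype(float)

rng = np.random.default_rng(0)
views = [rng.normal(size=(150, 10)), rng.normal(size=(150, 20))]
k = 3
P = [coassociation(KMeans(k, n_init=10, random_state=0).fit_predict(X)) for X in views]

weights = np.ones(len(P)) / len(P)
for _ in range(5):                                   # alternate consensus and weights
    C = sum(w * Pv for w, Pv in zip(weights, P))     # weighted consensus co-association
    dists = np.array([np.linalg.norm(Pv - C) for Pv in P])
    weights = 1.0 / (2.0 * dists + 1e-12)            # closer partitions get larger weight
    weights /= weights.sum()
C = sum(w * Pv for w, Pv in zip(weights, P))

consensus_labels = SpectralClustering(k, affinity="precomputed", random_state=0).fit_predict(C)
print(np.bincount(consensus_labels))
```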
17. Improved multi-view privileged support vector machine.
- Author
- Tang, Jingjing, Tian, Yingjie, Liu, Xiaohui, Li, Dewei, Lv, Jia, and Kou, Gang
- Subjects
- SUPPORT vector machines; LEARNING; MATHEMATICAL models; BIG data; MULTIPLIERS (Mathematical analysis)
- Abstract
Multi-view learning (MVL) concentrates on the problem of learning from data represented by multiple distinct feature sets. The consensus and complementarity principles play key roles in multi-view modeling. By exploiting the consensus principle or the complementarity principle among different views, various successful support vector machine (SVM)-based multi-view learning models have been proposed for performance improvement. Recently, the framework of learning using privileged information (LUPI) has been proposed to model data with complementary information. By bridging the LUPI paradigm and multi-view learning, we previously presented a privileged SVM-based two-view classification model, named PSVM-2V, satisfying both principles simultaneously. However, it can be further improved in three aspects: (1) fully unleashing the power of the complementary information among different views; (2) extending to the multi-view case; (3) constructing a more efficient optimization solver. Therefore, in this paper, we propose an improved privileged SVM-based model for multi-view learning, termed IPSVM-MV. It directly follows the standard LUPI model to fully utilize the multi-view complementary information; it is a general model for the multi-view scenario, and an alternating direction method of multipliers (ADMM) is employed to solve the corresponding optimization problem efficiently. Furthermore, we theoretically analyze the performance of IPSVM-MV from the viewpoints of the consensus principle and the generalization error bound. Experimental results on 75 binary data sets demonstrate the effectiveness of the proposed method; here we mainly concentrate on the two-view case to compare with state-of-the-art methods.
• IPSVM-MV serves as a general model for the multi-view scenario.
• We employ the alternating direction method of multipliers to solve IPSVM-MV efficiently.
• We theoretically analyze the performance of IPSVM-MV from two aspects.
• Experimental results demonstrate the effectiveness of the proposed method.
[ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. Multi-view L2-SVM and its multi-view core vector machine.
- Author
- Huang, Chengquan, Chung, Fu-lai, and Wang, Shitong
- Subjects
- SUPPORT vector machines; PARAMETER estimation; TIME-varying systems; STATISTICAL sampling; MACHINE learning
- Abstract
In this paper, a novel L2-SVM-based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed classifier does not have any bias in its objective function and hence has flexibility like μ-SVC, in the sense that the number of yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM can make full use of the coherence and the difference of different views by imposing consensus among multiple views, improving the overall classification performance. Moreover, based on the generalized core vector machine (GCVM), the proposed classifier is extended into its GCVM version, MvCVM, which enables fast training on large-scale multi-view datasets, with asymptotic time complexity linear in the sample size and space complexity independent of the sample size. Our experimental results demonstrate the effectiveness of the proposed Multi-view L2-SVM classifier for small-scale multi-view datasets and of the proposed MvCVM classifier for large-scale multi-view datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
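The consensus principle in the entry above, namely agreement between per-view classifiers trained with the squared hinge ("L2-SVM") loss, can be illustrated with plain gradient descent. The sketch below is an assumed simplification: it omits the bias-free formulation and the core-vector-machine extension, and uses a generic optimizer rather than the paper's solver; the synthetic data and weights are arbitrary.

```python
# Hedged sketch: two linear squared-hinge classifiers coupled by a consensus term.
import torch

rng = torch.Generator().manual_seed(0)
n = 200
y = (torch.rand(n, generator=rng) > 0.5).float() * 2 - 1          # labels in {-1, +1}
X1 = y[:, None] * 1.5 + torch.randn(n, 10, generator=rng)          # view 1
X2 = y[:, None] * 1.0 + torch.randn(n, 20, generator=rng)          # view 2

w1 = torch.zeros(10, requires_grad=True)
w2 = torch.zeros(20, requires_grad=True)
opt = torch.optim.SGD([w1, w2], lr=0.05)
for _ in range(300):
    f1, f2 = X1 @ w1, X2 @ w2
    sq_hinge = (torch.clamp(1 - y * f1, min=0) ** 2 + torch.clamp(1 - y * f2, min=0) ** 2).mean()
    consensus = ((f1 - f2) ** 2).mean()                             # make the views agree
    loss = sq_hinge + 0.1 * consensus + 1e-3 * (w1 @ w1 + w2 @ w2)
    opt.zero_grad(); loss.backward(); opt.step()

pred = torch.sign(X1 @ w1 + X2 @ w2)
print("train accuracy:", (pred == y).float().mean().item())
```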
19. Semi-supervised learning for tree-structured ensembles of RBF networks with Co-Training
- Author
- Abdel Hady, Mohamed Farouk, Schwenker, Friedhelm, and Palm, Günther
- Subjects
- MACHINE learning; TREE graphs; DATA mining; VISUAL perception; SET theory; DATA analysis; DATA reduction; PATTERN perception; MATHEMATICAL decomposition
- Abstract
Supervised learning requires a large amount of labeled data, but the data labeling process can be expensive and time consuming, as it requires the effort of human experts. Co-Training is a semi-supervised learning method that can reduce the amount of required labeled data by exploiting the available unlabeled data to improve classification accuracy. It assumes that the patterns are represented by two or more redundantly sufficient feature sets (views) and that these views are independent given the class. On the other hand, most real-world pattern recognition tasks involve a large number of categories, which may make the task difficult. The tree-structured approach is an output-space decomposition method in which a complex multi-class problem is decomposed into a set of binary sub-problems. In this paper, we propose two learning architectures that combine the merits of the tree-structured approach and Co-Training. We show that our architectures are especially useful for classification tasks that involve a large number of classes and a small amount of labeled data, where the single-view tree-structured approach does not perform well alone but, when combined with Co-Training, can effectively exploit the independent views and the unlabeled data to improve recognition accuracy. [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
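Co-Training itself, the backbone of the entry above, is compact enough to sketch: two classifiers, one per view, take turns pseudo-labeling their most confident unlabeled samples into a shared labeled pool. The snippet below uses logistic regressions on synthetic two-view data as a hedged stand-in for the paper's tree-structured RBF ensembles; the data, round count, and selection rule are assumptions.

```python
# Hedged sketch of a basic Co-Training loop with a shared labeled pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_labeled = 600, 30
y = rng.integers(0, 2, n)
view1 = y[:, None] * 1.2 + rng.normal(size=(n, 10))       # two redundant views
view2 = y[:, None] * 1.0 + rng.normal(size=(n, 15))

labeled = list(range(n_labeled))
unlabeled = list(range(n_labeled, n))
y_work = y.copy().astype(float)
y_work[unlabeled] = -1                                     # hide the unlabeled targets

for _ in range(10):                                        # co-training rounds
    clf1 = LogisticRegression(max_iter=1000).fit(view1[labeled], y_work[labeled])
    clf2 = LogisticRegression(max_iter=1000).fit(view2[labeled], y_work[labeled])
    for clf, view in ((clf1, view1), (clf2, view2)):       # each view labels for the shared pool
        if not unlabeled:
            break
        proba = clf.predict_proba(view[unlabeled])
        best = int(np.argmax(proba.max(axis=1)))           # most confident unlabeled sample
        idx = unlabeled.pop(best)
        y_work[idx] = proba[best].argmax()                 # pseudo-label joins the pool
        labeled.append(idx)

acc = (clf1.predict(view1[unlabeled]) == y[unlabeled]).mean()
print("view-1 accuracy on remaining unlabeled samples:", acc)
```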