261 results for "graph contrastive learning"
Search Results
2. GMNI: Achieve good data augmentation in unsupervised graph contrastive learning
- Author
-
Xiong, Xin, Wang, Xiangyu, Yang, Suorong, Shen, Furao, and Zhao, Jian
- Published
- 2025
- Full Text
- View/download PDF
3. Understanding and mitigating dimensional collapse of Graph Contrastive Learning: A non-maximum removal approach
- Author
-
Sun, Jiawei, Chen, Ruoxin, Li, Jie, Ding, Yue, Wu, Chentao, Liu, Zhi, and Yan, Junchi
- Published
- 2025
- Full Text
- View/download PDF
4. Spatial domains identification in spatial transcriptomics using modality-aware and subspace-enhanced graph contrastive learning
- Author
-
Gui, Yang, Li, Chao, and Xu, Yan
- Published
- 2024
- Full Text
- View/download PDF
5. Auto-focus tracing: Image manipulation detection with artifact graph contrastive
- Author
-
Pan, Wenyan, Xia, Zhihua, Ma, Wentao, Wang, Yuwei, Gu, Lichuan, Shi, Guolong, and Zhao, Shan
- Published
- 2024
- Full Text
- View/download PDF
6. A global contextual enhanced structural-aware transformer for sequential recommendation
- Author
-
Zhang, Zhu, Yang, Bo, Chen, Xingming, and Li, Qing
- Published
- 2024
- Full Text
- View/download PDF
7. Dynamic heterogeneous graph representation via contrastive learning based on multi-prior tasks
- Author
-
Bai, Wenhao, Qiu, Liqing, and Zhao, Weidong
- Published
- 2025
- Full Text
- View/download PDF
8. Robust graph representation learning with asymmetric debiased contrasts
- Author
-
Li, Wen, Ng, Wing W.Y., Wang, Hengyou, Zhang, Jianjun, and Zhong, Cankun
- Published
- 2025
- Full Text
- View/download PDF
9. Graph contrastive learning with multiple information fusion
- Author
-
Wang, Xiaobao, Yang, Jun, Wang, Zhiqiang, He, Dongxiao, Zhao, Jitao, Huang, Yuxiao, and Jin, Di
- Published
- 2025
- Full Text
- View/download PDF
10. Contrastive multi-graph learning with neighbor hierarchical sifting for semi-supervised text classification
- Author
-
Ai, Wei, Li, Jianbin, Wang, Ze, Wei, Yingying, Meng, Tao, and Li, Keqin
- Published
- 2025
- Full Text
- View/download PDF
11. Unraveling and Mitigating Endogenous Task-oriented Spurious Correlations in Ego-graphs via Automated Counterfactual Contrastive Learning
- Author
-
Lin, Tianqianjin, Kang, Yangyang, Jiang, Zhuoren, Song, Kaisong, Kuang, Kun, Sun, Changlong, Huang, Cui, and Liu, Xiaozhong
- Published
- 2025
- Full Text
- View/download PDF
12. BGCSL: An unsupervised framework reveals the underlying structure of large-scale whole-brain functional connectivity networks
- Author
-
Zhang, Hua, Zeng, Weiming, Li, Ying, Deng, Jin, and Wei, Boyang
- Published
- 2025
- Full Text
- View/download PDF
13. From overfitting to robustness: Quantity, quality, and variety oriented negative sample selection in graph contrastive learning
- Author
-
Ali, Adnan, Li, Jinlong, Chen, Huanhuan, and Bashir, Ali Kashif
- Published
- 2025
- Full Text
- View/download PDF
14. Multi-scale graph harmonies: Unleashing U-Net’s potential for medical image segmentation through contrastive learning
- Author
-
Wu, Jie, Ma, Jiquan, Xi, Heran, Li, Jinbao, and Zhu, Jinghua
- Published
- 2025
- Full Text
- View/download PDF
15. Graph Contrastive Learning for Multi-behavior Recommendation
- Author
-
Li, Haiying, Wang, Huihui, Meng, Shunmei, Chen, Xingguo, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Sheng, Quan Z., editor, Dobbie, Gill, editor, Jiang, Jing, editor, Zhang, Xuyun, editor, Zhang, Wei Emma, editor, Manolopoulos, Yannis, editor, Wu, Jia, editor, Mansoor, Wathiq, editor, and Ma, Congbo, editor
- Published
- 2025
- Full Text
- View/download PDF
16. CSA4Rec: Collaborative Signals Augmentation Model Based on GCN for Recommendation
- Author
-
Liu, Haibo, Yu, Lianjie, Si, Yali, Liu, Jinglian, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Barhamgi, Mahmoud, editor, Wang, Hua, editor, and Wang, Xin, editor
- Published
- 2025
- Full Text
- View/download PDF
17. Invariant Risk Minimization Augmentation for Graph Contrastive Learning
- Author
-
Qin, Peng, Chen, Weifu, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Lin, Zhouchen, editor, Cheng, Ming-Ming, editor, He, Ran, editor, Ubul, Kurban, editor, Silamu, Wushouer, editor, Zha, Hongbin, editor, Zhou, Jie, editor, and Liu, Cheng-Lin, editor
- Published
- 2025
- Full Text
- View/download PDF
18. Characterizing the Histology Spatial Intersections Between Tumor-Infiltrating Lymphocytes and Tumors for Survival Prediction of Cancers Via Graph Contrastive Learning
- Author
-
Shi, Yangyang, Zhu, Qi, Zuo, Yingli, Wan, Peng, Zhang, Daoqiang, Shao, Wei, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Xu, Xuanang, editor, Cui, Zhiming, editor, Rekik, Islem, editor, Ouyang, Xi, editor, and Sun, Kaicong, editor
- Published
- 2025
- Full Text
- View/download PDF
19. Predicting noncoding RNA and disease associations using multigraph contrastive learning.
- Author
-
Sun, Si-Lin, Jiang, Yue-Yi, Yang, Jun-Ping, Xiu, Yu-Han, Bilal, Anas, and Long, Hai-Xia
- Subjects
- NON-coding RNA, ARTIFICIAL intelligence, K-means clustering, ALZHEIMER'S disease, MICRORNA
- Abstract
MiRNAs and lncRNAs are two essential noncoding RNAs. Predicting associations between noncoding RNAs and diseases can significantly improve the accuracy of early diagnosis. With the continuous breakthroughs in artificial intelligence, researchers increasingly use deep learning methods to predict associations. Nevertheless, most existing methods face two major issues: low prediction accuracy and the limitation of only being able to predict a single type of noncoding RNA-disease association. To address these challenges, this paper proposes a method called K-Means and multigraph Contrastive Learning for predicting associations among miRNAs, lncRNAs, and diseases (K-MGCMLD). The K-MGCMLD model is divided into four main steps. The first step is the construction of a heterogeneous graph. The second step involves downsampling using the K-means clustering algorithm to balance the positive and negative samples. The third step is to use an encoder with a Graph Convolutional Network (GCN) architecture to extract embedding vectors. Multigraph contrastive learning, including both local and global graph contrastive learning, is used to help the embedding vectors better capture the latent topological features of the graph. The fourth step involves feature reconstruction using the balanced positive and negative samples, with the embedding vectors fed into an XGBoost classifier for multi-association classification prediction. Experimental results show that the AUC value is 0.9542 for miRNA-disease association, 0.9603 for lncRNA-disease association, and 0.9687 for lncRNA-miRNA association. Additionally, this study has conducted case analyses using K-MGCMLD, which validated all of the top 30 miRNAs predicted to be associated with lung cancer and Alzheimer's disease. [ABSTRACT FROM AUTHOR] (See the sketch after this entry.)
- Published
- 2025
- Full Text
- View/download PDF
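The K-MGCMLD entry above describes a four-step pipeline: build a heterogeneous graph, balance negatives with K-means downsampling, embed nodes with a GCN-style encoder refined by multigraph contrastive learning, and classify pairs with XGBoost. Below is a minimal, illustrative sketch of that flow on synthetic data; the parameter-free propagation step, the cluster-centre negative selection, and scikit-learn's GradientBoostingClassifier (standing in for XGBoost) are simplifying assumptions, and the contrastive objective itself is omitted.

```python
# Illustrative sketch of a K-MGCMLD-style pipeline on synthetic data (not the
# authors' code): GCN-like propagation for embeddings, K-means downsampling of
# negative pairs, and a boosted-tree classifier standing in for XGBoost.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_nodes, n_feats = 40, 16

# Step 1 (simplified): a toy association graph instead of a real heterogeneous graph.
A = (rng.random((n_nodes, n_nodes)) < 0.05).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
X = rng.normal(size=(n_nodes, n_feats))

# Step 3 (simplified): one normalized propagation, D^-1/2 (A+I) D^-1/2 X; the
# contrastive refinement of these embeddings is omitted here.
A_hat = A + np.eye(n_nodes)
d = 1.0 / np.sqrt(A_hat.sum(1))
Z = (d[:, None] * A_hat * d[None, :]) @ X

# Known positive pairs (linked nodes) and candidate negative pairs (unlinked).
pos = np.array([(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes) if A[i, j]])
neg_candidates = np.array([(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes) if not A[i, j]])

# Step 2: K-means downsampling of negatives to balance the classes -- keep the
# candidate nearest to each cluster centre.
neg_feats = np.hstack([Z[neg_candidates[:, 0]], Z[neg_candidates[:, 1]]])
km = KMeans(n_clusters=len(pos), n_init=10, random_state=0).fit(neg_feats)
keep = [int(np.argmin(np.linalg.norm(neg_feats - c, axis=1))) for c in km.cluster_centers_]
neg = neg_candidates[keep]

# Step 4: pair features -> classifier (GradientBoostingClassifier as an XGBoost stand-in).
pairs = np.vstack([pos, neg])
y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
feats = np.hstack([Z[pairs[:, 0]], Z[pairs[:, 1]]])
clf = GradientBoostingClassifier().fit(feats, y)
print("training accuracy:", round(clf.score(feats, y), 3))
```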
20. GPS: graph contrastive learning via multi-scale augmented views from adversarial pooling.
- Author
-
Ju, Wei, Gu, Yiyang, Mao, Zhengyang, Qiao, Ziyue, Qin, Yifang, Luo, Xiao, Xiong, Hui, and Zhang, Ming
- Abstract
Self-supervised graph representation learning has recently shown considerable promise in a range of fields, including bioinformatics and social networks. A large number of graph contrastive learning approaches have shown promising performance for representation learning on graphs, which train models by maximizing agreement between original graphs and their augmented views (i.e., positive views). Unfortunately, these methods usually involve pre-defined augmentation strategies based on the knowledge of human experts. Moreover, these strategies may fail to generate challenging positive views to provide sufficient supervision signals. In this paper, we present a novel approach named graph pooling contrast (GPS) to address these issues. Motivated by the fact that graph pooling can adaptively coarsen the graph with the removal of redundancy, we rethink graph pooling and leverage it to automatically generate multi-scale positive views with varying emphasis on providing challenging positives and preserving semantics, i.e., strongly-augmented view and weakly-augmented view. Then, we incorporate both views into a joint contrastive learning framework with similarity learning and consistency learning, where our pooling module is adversarially trained with respect to the encoder for adversarial robustness. Experiments on twelve datasets on both graph classification and transfer learning tasks verify the superiority of the proposed method over its counterparts. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
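The GPS abstract above generates strongly- and weakly-augmented positive views by pooling (coarsening) the graph at different scales. The toy sketch below coarsens a random graph with plain K-means cluster averaging and compares graph-level embeddings from an untrained propagation encoder; the adversarially trained pooling module and the similarity/consistency losses of the paper are not reproduced, and all names are placeholders.

```python
# Toy illustration (not the authors' implementation) of multi-scale positive
# views via graph coarsening: a weak view keeps many clusters, a strong view
# keeps few, and both are compared to the anchor graph with cosine similarity.
import numpy as np
from sklearn.cluster import KMeans

def encode(A, X):
    """One normalized propagation step followed by mean pooling -> graph-level vector."""
    A_hat = A + np.eye(len(A))
    d = 1.0 / np.sqrt(A_hat.sum(1))
    return ((d[:, None] * A_hat * d[None, :]) @ X).mean(axis=0)

def coarsen(A, X, k):
    """Cluster nodes with K-means and aggregate adjacency/features per cluster."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    P = np.eye(k)[labels]                      # n x k hard assignment matrix
    sizes = np.maximum(P.sum(axis=0), 1.0)
    return P.T @ A @ P, (P.T @ X) / sizes[:, None]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

rng = np.random.default_rng(1)
n, f = 30, 8
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
X = rng.normal(size=(n, f))

z_anchor = encode(A, X)
z_weak = encode(*coarsen(A, X, k=20))   # mild coarsening: weakly-augmented view
z_strong = encode(*coarsen(A, X, k=4))  # aggressive coarsening: strongly-augmented view

print("anchor vs weak view  :", round(cosine(z_anchor, z_weak), 3))
print("anchor vs strong view:", round(cosine(z_anchor, z_strong), 3))
```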
21. Spoken language understanding via graph contrastive learning on the context-aware graph convolutional network.
- Author
-
Cao, Ze and Liu, Jian-Wei
- Abstract
A spoken language understanding system is a crucial component of a dialogue system whose task is to comprehend the user’s verbal expressions and perform the corresponding tasks accordingly. Contextual spoken language understanding (contextual SLU) is an extremely critical issue in this field as it helps the system to understand the user’s verbal expressions more accurately, thus improving the system’s performance and accuracy. The aim of this paper is to enhance the effectiveness of contextual SLU analysis. Context-based language unit systems are mainly concerned with effectively integrating dialog context information. Current approaches usually use the same contextual information to guide the slot filling of all tokens, which may introduce irrelevant information and lead to comprehension bias and ambiguity. To solve this problem, we apply the principle of graph contrastive learning based on the graph convolutional network to enhance the model’s ability to aggregate contextual information. Simultaneously, applying graph contrastive learning can enhance the model’s effectiveness by strengthening its intention. More precisely, graph convolutional networks can consider and automatically aggregate contextual information, allowing the model to no longer rely on traditionally designed heuristic aggregation functions. The contrastive learning module utilizes the principle of contrastive learning to achieve the effect of intention enhancement, which can learn deeper semantic information and contextual relationships, and improve the model's effectiveness in three key tasks: slot filling, dialogue action recognition, and intention detection. Experiments on a synthetic dialogue dataset show that our model achieves state-of-the-art performance and significantly outperforms other previous approaches (Slot F1 values + 1.03% on Sim-M, + 2.32% on Sim-R; Act F1 values + 0.26% on Sim-M, + 0.56% on Sim-R; Frame Acc values + 3.15% on Sim-M, + 1.62% on Sim-R). The code is available at: . [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. DAPNet: multi-view graph contrastive network incorporating disease clinical and molecular associations for disease progression prediction.
- Author
-
Tian, Haoyu, He, Xiong, Yang, Kuo, Dai, Xinyu, Liu, Yiming, Zhang, Fengjin, Shu, Zixin, Zheng, Qiguang, Wang, Shihua, Xia, Jianan, Wen, Tiancai, Liu, Baoyan, Yu, Jian, and Zhou, Xuezhong
- Subjects
- KNOWLEDGE graphs, DISEASE duration, MOLECULAR association, DISEASE progression, DEEP learning
- Abstract
Background: Timely and accurate prediction of disease progress is crucial for facilitating early intervention and treatment for various chronic diseases. However, due to the complicated and longitudinal nature of disease progression, the capacity and completeness of clinical data required for training deep learning models remains a significant challenge. This study aims to explore a new method that reduces data dependency and achieves predictive performance comparable to existing research. Methods: This study proposed DAPNet, a deep learning-based disease progression prediction model that solely utilizes the comorbidity duration (without relying on multi-modal data or comprehensive medical records) and disease associations from biomedical knowledge graphs to deliver high-performance prediction. DAPNet is the first to apply multi-view graph contrastive learning to disease progression prediction tasks. Compared with other studies on comorbidities, DAPNet innovatively integrates molecular-level disease association information, combines disease co-occurrence and ICD10, and fully explores the associations between diseases; Results: This study validated DAPNet using a de-identified clinical dataset derived from medical claims, which includes 2,714 patients and 10,856 visits. Meanwhile, a kidney dataset (606 patients) based on MIMIC-IV has also been constructed to fully validate its performance. The results showed that DAPNet achieved state-of-the-art performance on the severe pneumonia dataset (F1=0.84, with an improvement of 8.7%), and outperformed the six baseline models on the kidney disease dataset (F1=0.80, with an improvement of 21.3%). Through case analysis, we elucidated the clinical and molecular associations identified by the DAPNet model, which facilitated a better understanding and explanation of potential disease association, thereby providing interpretability for the model. Conclusions: The proposed DAPNet, for the first time, utilizes comorbidity duration and disease associations network, enabling more accurate disease progression prediction based on a multi-view graph contrastive learning, which provides valuable insights for early diagnosis and treatment of patients. Based on disease association networks, our research has enhanced the interpretability of disease progression predictions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. A novel multi-view contrastive learning for herb recommendation.
- Author
-
Yang, Qiyuan, Cheng, Zhongtian, Kang, Yan, and Wang, Xinchao
- Subjects
- GRAPH neural networks, CHINESE medicine, DATA augmentation, STATISTICAL correlation, DATA distribution
- Abstract
Herb recommendation plays a crucial role in Traditional Chinese Medicine (TCM) by prescribing therapeutic herbs given symptoms. Current herb recommendation methods make promising progress by utilizing graph neural network, but most fail to capture the underlying distribution of the prescription, particularly the long-tailed distribution, and then suffer from cold-start and data sparsity problems such as emerging epidemics or rare diseases. To effectively alleviate these problems, we first propose a novel multi-view contrastive learning method to improve the prediction performance as contrastive learning can derive self-supervision signals from raw data. For exploiting structural and semantic relationship of symptoms and herbs, we construct the symptom-herb graph from inter-view, and the symptom-symptom interactions and the herb-herb interactions from intra-view. From inter-view, we present a new dual structural contrastive learning that adds and drops data depending on the frequency of the prescription dataset to exploit unbalanced data distribution rather than traditional data augmentation. From intra-view, we propose multi-level semantic contrastive learning depending on the co-occurrence frequencies of symptoms and herbs respectively for utilizing various correlations based on statistical results. Experiments conducted on real-world datasets demonstrate the superiority of the proposed method, which improves performance, and robustness against data sparsity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. GGN-GO: geometric graph networks for predicting protein function by multi-scale structure features.
- Author
-
Mi, Jia, Wang, Han, Li, Jing, Sun, Jinghong, Li, Chang, Wan, Jing, Zeng, Yuan, and Gao, Jingyang
- Subjects
- CYTOSKELETAL proteins, REPRESENTATIONS of graphs, AMINO acid sequence, PROTEIN structure, FEATURE extraction
- Abstract
Recent advances in high-throughput sequencing have led to an explosion of genomic and transcriptomic data, offering a wealth of protein sequence information. However, the functions of most proteins remain unannotated. Traditional experimental methods for annotation of protein functions are costly and time-consuming. Current deep learning methods typically rely on Graph Convolutional Networks to propagate features between protein residues. However, these methods fail to capture fine atomic-level geometric structural features and cannot directly compute or propagate structural features (such as distances, directions, and angles) when transmitting features, often simplifying them to scalars. Additionally, difficulties in capturing long-range dependencies limit the model's ability to identify key nodes (residues). To address these challenges, we propose a geometric graph network (GGN-GO) for predicting protein function that enriches feature extraction by capturing multi-scale geometric structural features at the atomic and residue levels. We use a geometric vector perceptron to convert these features into vector representations and aggregate them with node features for better understanding and propagation in the network. Moreover, we introduce a graph attention pooling layer that captures key node information by adaptively aggregating local functional motifs, while contrastive learning enhances graph representation discriminability through random noise and different views. The experimental results show that GGN-GO outperforms six comparative methods in tasks with the most labels for both experimentally validated and predicted protein structures. Furthermore, GGN-GO identifies functional residues corresponding to those experimentally confirmed, showcasing its interpretability and the ability to pinpoint key protein regions. The code and data are available at: https://github.com/MiJia-ID/GGN-GO [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Graph contrastive learning as a versatile foundation for advanced scRNA-seq data analysis.
- Author
-
Zhang, Zhenhao, Liu, Yuxi, Xiao, Meichen, Wang, Kun, Huang, Yu, Bian, Jiang, Yang, Ruolin, and Li, Fuyi
- Subjects
- GRAPH neural networks, MACHINE learning, GENE expression, RNA sequencing, SOURCE code, DEEP learning
- Abstract
Single-cell RNA sequencing (scRNA-seq) offers unprecedented insights into transcriptome-wide gene expression at the single-cell level. Cell clustering has been long established in the analysis of scRNA-seq data to identify the groups of cells with similar expression profiles. However, cell clustering is technically challenging, as raw scRNA-seq data have various analytical issues, including high dimensionality and dropout values. Existing research has developed deep learning models, such as graph machine learning models and contrastive learning-based models, for cell clustering using scRNA-seq data and has summarized the unsupervised learning of cell clustering into a human-interpretable format. While advances in cell clustering have been profound, we are no closer to finding a simple yet effective framework for learning high-quality representations necessary for robust clustering. In this study, we propose scSimGCL, a novel framework based on the graph contrastive learning paradigm for self-supervised pretraining of graph neural networks. This framework facilitates the generation of high-quality representations crucial for cell clustering. Our scSimGCL incorporates cell-cell graph structure and contrastive learning to enhance the performance of cell clustering. Extensive experimental results on simulated and real scRNA-seq datasets suggest the superiority of the proposed scSimGCL. Moreover, clustering assignment analysis confirms the general applicability of scSimGCL, including state-of-the-art clustering algorithms. Further, ablation study and hyperparameter analysis suggest the efficacy of our network architecture with the robustness of decisions in the self-supervised learning setting. The proposed scSimGCL can serve as a robust framework for practitioners developing tools for cell clustering. The source code of scSimGCL is publicly available at https://github.com/zhangzh1328/scSimGCL. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. XsimGCL's cross-layer for group recommendation using extremely simple graph contrastive learning.
- Author
-
Liu, Tengjiao
- Subjects
- SUPERVISED learning, UNIFORMITY, BIPARTITE graphs
- Abstract
Group recommendation involves suggesting items or activities to a group of users based on their collective preferences or characteristics. Graph contrastive learning is a technique used to learn representations of items and users in a graph structure. Although contrastive learning-based recommendation techniques reduce the data sparsity problem by extracting general features from raw data and also make the representation of user-item bipartite graph augmentations more consistent, the factors contributing to improving the performance of this technique are still not fully understood. Meanwhile, graph augmentations have little importance in contrastive learning-based recommendation and are relatively unreliable. The eXtremely Simple Graph Contrastive Learning (XSimGCL) provides novel insights into the effect of contrastive learning on recommendation, where views for contrastive learning are created through a simple yet effective noise-based embedding augmentation. Although XSimGCL infers the final group decision by dynamically aggregating the preferences of group members and includes various types of interaction, the performance of supervised learning is reduced due to the data sparsity problem, and as a result, the efficiency of group preference representation is limited. To address this challenge, this study develops GR-GCL, a Group Recommendation model based on XSimGCL. GR-GCL is inspired by the Light Graph Convolution Network (LightGCN) to realize simultaneous learning of multiple graphs, where initial embedding is considered the only update parameter. Also, GR-GCL improves group recommendation by applying cross-layer contrastive learning in the XSimGCL model by representing more diverse entities. The rationality analysis of our proposed GR-GCL has been performed on several datasets from both analytical and empirical perspectives. Although our model is very simple, it performs better in group recommendations by adjusting the uniformity of representations learned from counterparts based on contrastive learning. [ABSTRACT FROM AUTHOR] (See the sketch after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
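The XSimGCL-style augmentation mentioned above replaces graph augmentation with small noise added directly to embeddings during LightGCN-style propagation, and contrasts the final layer against an intermediate ("cross-layer") one. The following is a rough numpy sketch of that idea with untrained embeddings; the exact noise construction, the recommendation loss, and GR-GCL's group-preference aggregation are omitted.

```python
# Rough numpy sketch (untrained, toy sizes) of noise-perturbed LightGCN-style
# propagation and a cross-layer InfoNCE term; the exact noise construction and
# all recommendation-specific losses are omitted.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 8, 12, 16

R = (rng.random((n_users, n_items)) < 0.3).astype(float)   # user-item interactions
A = np.block([[np.zeros((n_users, n_users)), R],
              [R.T, np.zeros((n_items, n_items))]])
d = 1.0 / np.sqrt(np.maximum(A.sum(1), 1.0))
A_norm = d[:, None] * A * d[None, :]

E0 = rng.normal(size=(n_users + n_items, dim))              # initial embeddings

def propagate(E0, n_layers=3, eps=0.1):
    """LightGCN-style layers, each perturbed by small sign-aligned random noise."""
    layers, E = [E0], E0
    for _ in range(n_layers):
        E = A_norm @ E
        noise = rng.normal(size=E.shape)
        noise = noise / np.linalg.norm(noise, axis=1, keepdims=True)
        E = E + eps * np.sign(E) * np.abs(noise)
        layers.append(E)
    return layers

def info_nce(a, b, tau=0.2):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / tau
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))

layers = propagate(E0)
final_layer, cross_layer = layers[-1], layers[1]   # contrast final vs. an early layer
print("cross-layer contrastive loss:", round(info_nce(final_layer, cross_layer), 4))
```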
27. Position-Aware and Subgraph Enhanced Dynamic Graph Contrastive Learning on Discrete-Time Dynamic Graph.
- Author
-
Feng, Jian, Liu, Tian, and Du, Cailing
- Subjects
- GRAPH neural networks, REPRESENTATIONS of graphs, SUPERVISED learning, DATA augmentation, RANDOM walks
- Abstract
Unsupervised learning methods such as graph contrastive learning have been used for dynamic graph representation learning to eliminate the dependence of labels. However, existing studies neglect positional information when learning discrete snapshots, resulting in insufficient network topology learning. At the same time, due to the lack of appropriate data augmentation methods, it is difficult to capture the evolving patterns of the network effectively. To address the above problems, a position-aware and subgraph enhanced dynamic graph contrastive learning method is proposed for discrete-time dynamic graphs. Firstly, the global snapshot is built based on the historical snapshots to express the stable pattern of the dynamic graph, and the random walk is used to obtain the position representation by learning the positional information of the nodes. Secondly, a new data augmentation method is carried out from the perspectives of short-term changes and long-term stable structures of dynamic graphs. Specifically, subgraph sampling based on snapshots and global snapshots is used to obtain two structural augmentation views, and node structures and evolving patterns are learned by combining graph neural network, gated recurrent unit, and attention mechanism. Finally, the quality of node representation is improved by combining the contrastive learning between different structural augmentation views and between the two representations of structure and position. Experimental results on four real datasets show that the performance of the proposed method is better than the existing unsupervised methods, and it is more competitive than the supervised learning method under a semi-supervised setting. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Predicting noncoding RNA and disease associations using multigraph contrastive learning
- Author
-
Si-Lin Sun, Yue-Yi Jiang, Jun-Ping Yang, Yu-Han Xiu, Anas Bilal, and Hai-Xia Long
- Subjects
- MiRNAs, lncRNAs, Diseases, Heterogeneous graph, Graph contrastive learning, Multi-association prediction, Medicine, Science
- Abstract
MiRNAs and lncRNAs are two essential noncoding RNAs. Predicting associations between noncoding RNAs and diseases can significantly improve the accuracy of early diagnosis. With the continuous breakthroughs in artificial intelligence, researchers increasingly use deep learning methods to predict associations. Nevertheless, most existing methods face two major issues: low prediction accuracy and the limitation of only being able to predict a single type of noncoding RNA-disease association. To address these challenges, this paper proposes a method called K-Means and multigraph Contrastive Learning for predicting associations among miRNAs, lncRNAs, and diseases (K-MGCMLD). The K-MGCMLD model is divided into four main steps. The first step is the construction of a heterogeneous graph. The second step involves downsampling using the K-means clustering algorithm to balance the positive and negative samples. The third step is to use an encoder with a Graph Convolutional Network (GCN) architecture to extract embedding vectors. Multigraph contrastive learning, including both local and global graph contrastive learning, is used to help the embedding vectors better capture the latent topological features of the graph. The fourth step involves feature reconstruction using the balanced positive and negative samples, with the embedding vectors fed into an XGBoost classifier for multi-association classification prediction. Experimental results show that the AUC value is 0.9542 for miRNA-disease association, 0.9603 for lncRNA-disease association, and 0.9687 for lncRNA-miRNA association. Additionally, this study has conducted case analyses using K-MGCMLD, which validated all of the top 30 miRNAs predicted to be associated with lung cancer and Alzheimer’s disease.
- Published
- 2025
- Full Text
- View/download PDF
29. Graph Contrastive Pre-training for Anti-money Laundering
- Author
-
Hanbin Lu and Haosen Wang
- Subjects
- Anti-money laundering, Graph contrastive learning, Graph neural network, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Anti-money laundering (AML) is vital to maintaining financial markets, social stability, and political authority. At present, many studies model the AML task as a graph and leverage graph neural networks (GNNs) for node/edge classification. Although these studies have achieved some success, they struggle with the issue of label scarcity in real-world scenarios. In this paper, we propose a graph contrastive pre-training framework for anti-money laundering (GCPAL), which mines supervised signals from the label-free transaction network to significantly reduce the dependence on annotations. Specifically, we construct three augmented views (i.e., two stochastic perturbed views and a KNN view). The perturbed views help the model learn invariant information and improve its robustness against noise. The KNN view provides implicit interactions to mitigate the link sparsity in the transaction network. Moreover, we extend the positive sample set using connected neighbors and node pairs with similar features to further enhance the expressiveness of the model. We evaluate GCPAL on two datasets, and the extensive experimental results demonstrate that GCPAL is consistently superior to other SOTA baselines, especially with scarce labels. (See the sketch after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
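The GCPAL abstract lists three augmented views: two stochastic perturbed copies of the transaction graph and a feature-based KNN view, with the positive set extended by neighbours and feature-similar pairs. The snippet below sketches only that view construction on synthetic data (random edge dropping plus scikit-learn's kneighbors_graph); the contrastive pre-training itself is not shown.

```python
# Sketch of GCPAL-style view construction on synthetic data: two stochastic
# edge-dropped views plus a feature-based KNN view, and a positive set extended
# with graph neighbours and feature-similar pairs. The contrastive pre-training
# itself is not shown, and all numbers are invented.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
n, f = 50, 10
A = (rng.random((n, n)) < 0.06).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
X = rng.normal(size=(n, f))

def perturbed_view(A, drop_prob=0.2):
    """Randomly remove a fraction of edge entries (a stochastic perturbed view)."""
    kept = A * (rng.random(A.shape) > drop_prob)
    return np.maximum(kept, kept.T)

view1, view2 = perturbed_view(A), perturbed_view(A)

# KNN view: connect each node to its k most feature-similar nodes, adding
# implicit interactions where explicit links are sparse.
knn_view = kneighbors_graph(X, n_neighbors=5, mode="connectivity").toarray()
knn_view = np.maximum(knn_view, knn_view.T)

# Extended positive pairs: direct neighbours plus KNN (feature-similar) pairs.
positives = {(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j] or knn_view[i, j]}

print("edge entries kept by the two perturbed views:", int(view1.sum()), int(view2.sum()))
print("positive pairs after extension:", len(positives))
```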
30. DAPNet: multi-view graph contrastive network incorporating disease clinical and molecular associations for disease progression prediction
- Author
-
Haoyu Tian, Xiong He, Kuo Yang, Xinyu Dai, Yiming Liu, Fengjin Zhang, Zixin Shu, Qiguang Zheng, Shihua Wang, Jianan Xia, Tiancai Wen, Baoyan Liu, Jian Yu, and Xuezhong Zhou
- Subjects
- Disease progression prediction, Disease association networks, Graph contrastive learning, Computer applications to medicine. Medical informatics, R858-859.7
- Abstract
Background: Timely and accurate prediction of disease progress is crucial for facilitating early intervention and treatment for various chronic diseases. However, due to the complicated and longitudinal nature of disease progression, the capacity and completeness of clinical data required for training deep learning models remains a significant challenge. This study aims to explore a new method that reduces data dependency and achieves predictive performance comparable to existing research. Methods: This study proposed DAPNet, a deep learning-based disease progression prediction model that solely utilizes the comorbidity duration (without relying on multi-modal data or comprehensive medical records) and disease associations from biomedical knowledge graphs to deliver high-performance prediction. DAPNet is the first to apply multi-view graph contrastive learning to disease progression prediction tasks. Compared with other studies on comorbidities, DAPNet innovatively integrates molecular-level disease association information, combines disease co-occurrence and ICD10, and fully explores the associations between diseases; Results: This study validated DAPNet using a de-identified clinical dataset derived from medical claims, which includes 2,714 patients and 10,856 visits. Meanwhile, a kidney dataset (606 patients) based on MIMIC-IV has also been constructed to fully validate its performance. The results showed that DAPNet achieved state-of-the-art performance on the severe pneumonia dataset (F1=0.84, with an improvement of 8.7%), and outperformed the six baseline models on the kidney disease dataset (F1=0.80, with an improvement of 21.3%). Through case analysis, we elucidated the clinical and molecular associations identified by the DAPNet model, which facilitated a better understanding and explanation of potential disease association, thereby providing interpretability for the model. Conclusions: The proposed DAPNet, for the first time, utilizes comorbidity duration and disease associations network, enabling more accurate disease progression prediction based on a multi-view graph contrastive learning, which provides valuable insights for early diagnosis and treatment of patients. Based on disease association networks, our research has enhanced the interpretability of disease progression predictions.
- Published
- 2024
- Full Text
- View/download PDF
31. Harnessing Unsupervised Insights: Enhancing Black-Box Graph Injection Attacks with Graph Contrastive Learning.
- Author
-
Liu, Xiao, Huang, Junjie, Chen, Zihan, Pan, Yi, Xiong, Maoyi, and Zhao, Wentao
- Subjects
- GRAPH neural networks, KNOWLEDGE graphs
- Abstract
Adversarial attacks on Graph Neural Networks (GNNs) have emerged as a significant threat to the security of graph learning. Compared with Graph Modification Attacks (GMAs), Graph Injection Attacks (GIAs) are considered more realistic attacks, in which attackers perturb GNN models by injecting a small number of fake nodes. However, most existing black-box GIA methods either require comprehensive knowledge of the dataset and the ground-truth labels or a large number of queries to execute the attack, which is often unfeasible in many scenarios. In this paper, we propose an unsupervised method for leveraging the rich knowledge contained in the graph data themselves to enhance the success rate of graph injection attacks on the initial query. Specifically, we introduce Graph Contrastive Learning-based Graph Injection Attack (GCIA), which consists of a node encoder, a reward predictor, and a fake node generator. The Graph Contrastive Learning (GCL)-based node encoder transforms nodes into low-dimensional continuous embeddings, the reward predictor acts as a simplified surrogate for the target model, and the fake node generator produces fake nodes and edges based on several carefully designed loss functions, utilizing the node encoder and reward predictor. Extensive results demonstrate that the proposed GCIA method achieves a first query success rate of 91.2% on the Reddit dataset and improves the success rate to over 99.7% after 10 queries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Heterogeneous Graph Contrastive Learning with Attention Mechanism for Recommendation.
- Author
-
Ruxing Li, Dan Yang, and Xi Gong
- Subjects
- GRAPH neural networks, MACHINE learning, REPRESENTATIONS of graphs, INFORMATION networks, KNOWLEDGE transfer
- Abstract
Existing recommendation algorithms based on heterogeneous graphs often face performance limitations due to the sparsity and nonlinearity of the heterogeneous graph structure and semantic information, which hinders the full exploitation of the association information between users and items. In order to tackle these challenges and improve the quality of user and item feature representations, a Heterogeneous Graph Contrastive Learning Recommendation algorithm based on Attention Mechanism (HAMRec) has been proposed. To enhance the robustness of graph representations, this algorithm introduces an unsupervised contrastive learning approach and utilizes attention mechanisms on top of graph neural networks to extract both local and global information from different heterogeneous graphs. Considering the varying impact of heterogeneous auxiliary information on recommendation results in real-life scenarios, HAMRec employs personalized knowledge transfer to enhance self-supervised learning. Extensive experiments show that HAMRec surpasses existing baseline models in recommendation tasks, demonstrating its effectiveness and superiority. [ABSTRACT FROM AUTHOR]
- Published
- 2024
33. Joint data augmentations for automated graph contrastive learning and forecasting.
- Author
-
Liu, Jiaqi, Chen, Yifu, Ren, Qianqian, and Gao, Yang
- Subjects
- DATA augmentation, FORECASTING, GENERALIZATION
- Abstract
Graph augmentation plays a crucial role in graph contrastive learning. However, existing methods primarily optimize augmentations specific to particular datasets, which limits their robustness and generalization capabilities. To overcome these limitations, many studies have explored automated graph data augmentations. However, these approaches face challenges due to weak labels and data incompleteness. To tackle these challenges, we propose an innovative framework called Joint Data Augmentations for Automated Graph Contrastive Learning (JDAGCL). The proposed model first integrates two augmenters: a feature-level augmenter and an edge-level augmenter. The two augmenters learn whether to drop an edge or node to obtain optimized graph structures and enrich the information available for the modeling and forecasting tasks. Moreover, we introduce a two-stage training strategy to further process the features extracted by the encoder and enhance their effectiveness for the downstream forecasting task. The experimental results demonstrate that our proposed model JDAGCL achieves state-of-the-art performance compared to the latest baseline methods, with an average improvement of 14% in forecasting accuracy across multiple benchmark datasets. [ABSTRACT FROM AUTHOR] (See the sketch after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
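JDAGCL's central component is an augmenter that learns whether to drop each edge or node. The sketch below shows one common way such a learnable edge-level augmenter can be made differentiable, via a Gumbel-sigmoid relaxation of per-edge keep decisions; the scorer, the stand-in objective, and all names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal PyTorch sketch of a learnable edge-level augmenter: per-edge keep
# probabilities come from a small scorer and are sampled with a Gumbel-sigmoid
# relaxation so the drop decisions stay differentiable. The encoder and the
# contrastive/forecasting losses of the full model are omitted; every name here
# is a hypothetical stand-in.
import torch

torch.manual_seed(0)
num_edges = 40
edge_feat = torch.randn(num_edges, 8)        # features describing each edge

scorer = torch.nn.Linear(8, 1)               # learnable per-edge keep logits
logits = scorer(edge_feat).squeeze(-1)

def relaxed_keep_mask(logits, temperature=0.5):
    """Binary-concrete (Gumbel-sigmoid) relaxation of keep/drop decisions."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    gumbel = torch.log(u) - torch.log1p(-u)
    return torch.sigmoid((logits + gumbel) / temperature)

mask = relaxed_keep_mask(logits)              # soft keep mask in (0, 1)
kept_fraction = mask.mean()

# Stand-in objective: keep roughly 80% of edges; in the full framework this
# would be combined with the contrastive and forecasting losses.
loss = (kept_fraction - 0.8) ** 2
loss.backward()

print("soft keep fraction:", float(kept_fraction))
print("gradients reach the augmenter:", scorer.weight.grad is not None)
```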
34. A Recommendation System for Trigger–Action Programming Rules via Graph Contrastive Learning.
- Author
-
Kuang, Zhejun, Xiong, Xingbo, Wu, Gang, Wang, Feng, Zhao, Jian, and Sun, Dawen
- Subjects
- RECOMMENDER systems, INTERNET of things, DISCOVERY (Law), FUNCTION spaces, SMART homes
- Abstract
Trigger–action programming (TAP) enables users to automate Internet of Things (IoT) devices by creating rules such as "IF Device1.TriggerState is triggered, THEN Device2.ActionState is executed". As the number of IoT devices grows, the combination space between the functions provided by devices expands, making manual rule creation time-consuming for end-users. Existing TAP recommendation systems enhance the efficiency of rule discovery but face two primary issues: they ignore the association of rules between users and fail to model collaborative information among users. To address these issues, this article proposes a graph contrastive learning-based recommendation system for TAP rules, named GCL4TAP. In GCL4TAP, we first devise a data partitioning method called DATA2DIV, which establishes cross-user rule relationships and is represented by a user–rule bipartite graph. Then, we design a user–user graph to model the similarities among users based on the categories and quantities of devices that they own. Finally, these graphs are converted into low-dimensional vector representations of users and rules using graph contrastive learning techniques. Extensive experiments conducted on a real-world smart home dataset demonstrate the superior performance of GCL4TAP compared to other state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
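GCL4TAP builds two graphs before contrastive learning: a user-rule bipartite graph and a user-user graph based on the categories and quantities of devices each user owns. The toy snippet below constructs both from invented data, with cosine similarity standing in for the paper's (unspecified) similarity measure; the contrastive encoder that consumes these graphs is not shown.

```python
# Invented toy data illustrating the two graphs described above: a user-rule
# bipartite graph and a user-user graph from device ownership, with cosine
# similarity standing in for whatever similarity the paper actually uses.
import numpy as np

users = ["u1", "u2", "u3"]
rules = ["IF motion THEN light_on", "IF door_open THEN notify", "IF 18:00 THEN heater_on"]
device_categories = ["sensor", "light", "lock", "thermostat"]

# User-rule bipartite adjacency: 1 if the user adopted the rule.
user_rule = np.array([[1, 1, 0],
                      [0, 1, 1],
                      [1, 0, 1]], dtype=float)

# Ownership profiles: how many devices of each category a user owns.
ownership = np.array([[3, 2, 0, 1],
                      [1, 0, 2, 1],
                      [2, 1, 0, 0]], dtype=float)

# User-user similarity graph from cosine similarity of ownership profiles.
norm = ownership / np.linalg.norm(ownership, axis=1, keepdims=True)
user_user = norm @ norm.T
np.fill_diagonal(user_user, 0)

print(len(users), "users,", len(rules), "rules,", len(device_categories), "device categories")
print("user-rule bipartite graph:\n", user_rule)
print("user-user similarity graph:\n", np.round(user_user, 2))
```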
35. Global-local aware Heterogeneous Graph Contrastive Learning for multifaceted association prediction in miRNA–gene–disease networks.
- Author
-
Si, Yuxuan, Huang, Zihan, Fang, Zhengqing, Yuan, Zhouhang, Huang, Zhengxing, Li, Yingming, Wei, Ying, Wu, Fei, and Yao, Yu-Feng
- Subjects
- REPRESENTATIONS of graphs, SOURCE code, GLOBAL method of teaching, MICRORNA, DIAGNOSIS
- Abstract
Unraveling the intricate network of associations among microRNAs (miRNAs), genes, and diseases is pivotal for deciphering molecular mechanisms, refining disease diagnosis, and crafting targeted therapies. Computational strategies, leveraging link prediction within biological graphs, present a cost-efficient alternative to high-cost empirical assays. However, while plenty of methods excel at predicting specific associations, such as miRNA–disease associations (MDAs), miRNA–target interactions (MTIs), and disease–gene associations (DGAs), a holistic approach harnessing diverse data sources for multifaceted association prediction remains largely unexplored. The limited availability of high-quality data, as in vitro experiments to comprehensively confirm associations are often expensive and time-consuming, results in a sparse and noisy heterogeneous graph, hindering an accurate prediction of these complex associations. To address this challenge, we propose a novel framework called Global-local aware Heterogeneous Graph Contrastive Learning (GlaHGCL). GlaHGCL combines global and local contrastive learning to improve node embeddings in the heterogeneous graph. In particular, global contrastive learning enhances the robustness of node embeddings against noise by aligning global representations of the original graph and its augmented counterpart. Local contrastive learning enforces representation consistency between functionally similar or connected nodes across diverse data sources, effectively leveraging data heterogeneity and mitigating the issue of data scarcity. The refined node representations are applied to downstream tasks, such as MDA, MTI, and DGA prediction. Experiments show GlaHGCL outperforming state-of-the-art methods, and case studies further demonstrate its ability to accurately uncover new associations among miRNAs, genes, and diseases. We have made the datasets and source code publicly available at https://github.com/Sue-syx/GlaHGCL. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. A self-supervised graph representation learning method based on data and feature augmentation (基于数据与特征增强的自监督图表示学习方法).
- Author
-
许云峰 and 范贺荀
- Published
- 2024
- Full Text
- View/download PDF
37. BotCL: a social bot detection model based on graph contrastive learning.
- Author
-
Li, Yan, Li, Zhenyu, Gong, Daofu, Hu, Qian, and Lu, Haoyu
- Subjects
- GRAPH neural networks, DATA augmentation, ARTIFICIAL neural networks, DIRECTED graphs, COMPUTER network security, BOTNETS
- Abstract
The proliferation of social bots on social networks presents significant challenges to network security due to their malicious activities. While graph neural network models have shown promise in detecting social bots, acquiring a large number of high-quality labeled accounts remains challenging, impacting bot detection performance. To address this issue, we introduce BotCL, a social bot detection model that employs contrastive learning through data augmentation. Initially, we build a directed graph based on following/follower relationships, utilizing semantic, attribute, and structural features of accounts as initial node features. We then simulate account behaviors within the social network and apply two data augmentation techniques to generate multiple views of the directed graph. Subsequently, we encode the generated views using relational graph convolutional networks, achieving maximum homogeneity in node representations by minimizing the contrastive loss. Finally, node labels are predicted using Softmax. The proposed method augments data based on its distribution, showcasing robustness to noise. Extensive experimental results on Cresci-2015, Twibot-20, and Twibot-22 datasets demonstrate that our approach surpasses the state-of-the-art methods in terms of performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. MDGCL: Graph Contrastive Learning Framework with Multiple Graph Diffusion Methods.
- Author
-
Li, Yuqiang, Zhang, Yi, and Liu, Chun
- Abstract
In recent years, some classical graph contrastive learning (GCL) frameworks have been proposed to address the problem of sparse labeling of graph data in the real world. However, in node classification tasks, there are two obvious problems with existing GCL frameworks: first, the stochastic augmentation methods they adopt lose a lot of semantic information; second, the local–local contrasting mode selected by most frameworks ignores the global semantic information of the original graph, which limits the node classification performance of these frameworks. To address the above problems, this paper proposes a novel graph contrastive learning framework, MDGCL, which introduces two graph diffusion methods, Markov and PPR, and a deterministic–stochastic data augmentation strategy while retaining the local–local contrasting mode. Specifically, before using the two stochastic augmentation methods (FeatureDrop and EdgeDrop), MDGCL first uses two deterministic augmentation methods (Markov diffusion and PPR diffusion) to perform data augmentation on the original graph to increase the semantic information; this step ensures subsequent stochastic augmentation methods do not lose too much semantic information. Meanwhile, the diffusion matrices carried by the augmented views contain global semantic information of the original graph, allowing the framework to utilize the global semantic information while retaining the local–local contrasting mode, which further enhances the node classification performance of the framework. We conduct extensive comparative experiments on multiple benchmark datasets, and the results show that MDGCL outperforms the representative baseline frameworks on node classification tasks. Among them, compared with COSTA, MDGCL’s node classification accuracy has been improved by 1.07% and 0.41% respectively on two representative datasets, Amazon-Photo and Coauthor-CS. In addition, we also conduct ablation experiments on two datasets, Cora and CiteSeer, to verify the effectiveness of each improvement work of our framework. [ABSTRACT FROM AUTHOR] (See the sketch after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
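MDGCL applies two deterministic diffusion augmentations, Markov diffusion and personalized PageRank (PPR), before any stochastic FeatureDrop/EdgeDrop. The sketch below computes both diffusion matrices for a toy graph using the standard closed form for PPR and a truncated power series for Markov diffusion; the teleport probability and truncation depth are arbitrary choices, not values from the paper.

```python
# Toy computation of the two deterministic diffusion augmentations named above:
# personalized PageRank (closed form) and a truncated Markov diffusion (average
# of transition-matrix powers). alpha and K are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
n = 12
A = (rng.random((n, n)) < 0.25).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)

A_hat = A + np.eye(n)                                  # add self-loops
d = 1.0 / np.sqrt(A_hat.sum(1))
T = d[:, None] * A_hat * d[None, :]                    # symmetrically normalized transition matrix

alpha = 0.15                                           # teleport probability
S_ppr = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * T)

K = 4                                                  # truncation depth
S_markov = sum(np.linalg.matrix_power(T, k) for k in range(1, K + 1)) / K

print("PPR diffusion, first row   :", np.round(S_ppr[0], 3))
print("Markov diffusion, first row:", np.round(S_markov[0], 3))
```

Either diffusion matrix can then replace the raw adjacency as a deterministic augmented view before the random edge/feature dropping is applied.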
39. Towards Robust Rumor Detection with Graph Contrastive and Curriculum Learning.
- Author
-
Zhuang, Wen-Ming, Chen, Chih-Yao, and Li, Cheng-Te
- Subjects
- SOCIAL media, RUMOR, GRAPH neural networks, REPRESENTATIONS of graphs
- Abstract
Establishing a robust rumor detection model is vital in safeguarding the veracity of information on social media platforms. However, existing approaches to stopping rumors from spreading rely on abundant and clean training data, which is rarely available in real-world scenarios. In this work, we aim to develop a trustworthy rumor detection model that can handle inadequate and noisy labeled data. Our work addresses robust rumor detection, including classic and early detection, as well as five types of robustness issues: noisy and incomplete propagation, label scarcity and noise, and user disappearance. We propose a novel method, Robustness-Enhanced Rumor Detection (RERD), which mainly leverages the information propagation graphs of source tweets, along with user profiles and retweeting knowledge, for model learning. The novelty of RERD is four-fold. First, we jointly exploit the propagation structures of non-text and text retweets to learn the representation of a source tweet. Second, we simultaneously utilize the top-down and bottom-up information flows with relational propagations for graph representation learning. Third, to have effective early and robust detection, we implement contrastive learning on graphs with early and complete views of information propagation so that small snapshots can foresee their future shapes. Last, we use curriculum pseudo-labeling to mitigate the impact of label scarcity and noisy labels, and to correct representations learned from corrupted data. Experimental results on three benchmark datasets demonstrate that RERD consistently outperforms competitors in classic, early, and robust rumor detection scenarios. To the best of our knowledge, we are the first to simultaneously cope with early detection and all five robustness issues in rumor detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Boosting graph contrastive learning via adaptive graph augmentation and topology-feature-level homophily
- Author
-
Sun, Shuo, Zhao, Zhongying, Liu, Gen, Zhang, Qiqi, and Su, Lingtao
- Published
- 2024
- Full Text
- View/download PDF
41. Graph contrastive learning with high-order feature interactions and adversarial Wasserstein-distance-based alignment
- Author
-
Wang, Chenxu, Wan, Zhizhong, Meng, Panpan, Wang, Shihao, and Wang, Zhanggong
- Published
- 2024
- Full Text
- View/download PDF
42. Joint data augmentations for automated graph contrastive learning and forecasting
- Author
-
Jiaqi Liu, Yifu Chen, Qianqian Ren, and Yang Gao
- Subjects
- Graph contrastive learning, Data augmentation, Forecasting, Electronic computers. Computer science, QA75.5-76.95, Information technology, T58.5-58.64
- Abstract
Graph augmentation plays a crucial role in graph contrastive learning. However, existing methods primarily optimize augmentations specific to particular datasets, which limits their robustness and generalization capabilities. To overcome these limitations, many studies have explored automated graph data augmentations. However, these approaches face challenges due to weak labels and data incompleteness. To tackle these challenges, we propose an innovative framework called Joint Data Augmentations for Automated Graph Contrastive Learning (JDAGCL). The proposed model first integrates two augmenters: a feature-level augmenter and an edge-level augmenter. The two augmenters learn whether to drop an edge or node to obtain optimized graph structures and enrich the information available for the modeling and forecasting tasks. Moreover, we introduce a two-stage training strategy to further process the features extracted by the encoder and enhance their effectiveness for the downstream forecasting task. The experimental results demonstrate that our proposed model JDAGCL achieves state-of-the-art performance compared to the latest baseline methods, with an average improvement of 14% in forecasting accuracy across multiple benchmark datasets.
- Published
- 2024
- Full Text
- View/download PDF
43. Accurately deciphering spatial domains for spatially resolved transcriptomics with stCluster.
- Author
-
Wang, Tao, Shu, Han, Hu, Jialu, Wang, Yongtian, Chen, Jing, Peng, Jiajie, and Shang, Xuequn
- Subjects
- TRANSCRIPTOMES, GRAPH neural networks, GENE expression, SOURCE code
- Abstract
Spatial transcriptomics provides valuable insights into gene expression within the native tissue context, effectively merging molecular data with spatial information to uncover intricate cellular relationships and tissue organizations. In this context, deciphering cellular spatial domains becomes essential for revealing complex cellular dynamics and tissue structures. However, current methods encounter challenges in seamlessly integrating gene expression data with spatial information, resulting in less informative representations of spots and suboptimal accuracy in spatial domain identification. We introduce stCluster, a novel method that integrates graph contrastive learning with multi-task learning to refine informative representations for spatial transcriptomic data, consequently improving spatial domain identification. stCluster first leverages graph contrastive learning technology to obtain discriminative representations capable of recognizing spatially coherent patterns. Through jointly optimizing multiple tasks, stCluster further fine-tunes the representations to be able to capture complex relationships between gene expression and spatial organization. Benchmarked against six state-of-the-art methods, the experimental results reveal its proficiency in accurately identifying complex spatial domains across various datasets and platforms, spanning tissue, organ, and embryo levels. Moreover, stCluster can effectively denoise the spatial gene expression patterns and enhance the spatial trajectory inference. The source code of stCluster is freely available at https://github.com/hannshu/stCluster. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. ProtoMGAE: Prototype-Aware Masked Graph Auto-Encoder for Graph Representation Learning.
- Author
-
Zheng, Yimei and Jia, Caiyan
- Subjects
- REPRESENTATIONS of graphs, TASK performance, GRAPH algorithms
- Abstract
Graph self-supervised representation learning has gained considerable attention and demonstrated remarkable efficacy in extracting meaningful representations from graphs, particularly in the absence of labeled data. Two representative methods in this domain are graph auto-encoding and graph contrastive learning. However, the former methods primarily focus on global structures, potentially overlooking some fine-grained information during reconstruction. The latter methods emphasize node similarity across correlated views in the embedding space, potentially neglecting the inherent global graph information in the original input space. Moreover, handling incomplete graphs in real-world scenarios, where original features are unavailable for certain nodes, poses challenges for both types of methods. To alleviate these limitations, we integrate masked graph auto-encoding and prototype-aware graph contrastive learning into a unified model to learn node representations in graphs. In our method, we begin by masking a portion of node features and utilize a specific decoding strategy to reconstruct the masked information. This process facilitates the recovery of graphs from a global or macro level and enables handling incomplete graphs easily. Moreover, we treat the masked graph and the original one as a pair of contrasting views, enforcing the alignment and uniformity between their corresponding node representations at a local or micro level. Last, to capture cluster structures from a meso level and learn more discriminative representations, we introduce a prototype-aware clustering consistency loss that is jointly optimized with the preceding two complementary objectives. Extensive experiments conducted on several datasets demonstrate that the proposed method achieves significantly better or competitive performance on downstream tasks, especially for graph clustering, compared with the state-of-the-art methods, showcasing its superiority in enhancing graph representation learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
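ProtoMGAE combines masked feature reconstruction with contrasting the masked graph against the original one. The sketch below masks a fraction of node features, encodes both versions with a single untrained propagation step, and computes a reconstruction term on the masked nodes plus an alignment term between the two views; the learnable mask token, the decoder, and the prototype-aware clustering consistency loss are all omitted.

```python
# Sketch of masking-plus-alignment on a toy graph: zero out some node features
# (a stand-in for a learnable mask token), encode masked and original graphs
# with one untrained propagation step, then compute a reconstruction term on the
# masked nodes and an alignment term between the two views. The decoder and the
# prototype-aware clustering consistency loss are omitted.
import numpy as np

rng = np.random.default_rng(0)
n, f = 20, 6
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
X = rng.normal(size=(n, f))

A_hat = A + np.eye(n)
d = 1.0 / np.sqrt(A_hat.sum(1))
P = d[:, None] * A_hat * d[None, :]                 # propagation operator

masked_nodes = rng.choice(n, size=int(0.3 * n), replace=False)
X_masked = X.copy()
X_masked[masked_nodes] = 0.0                        # mask 30% of the nodes

Z_orig, Z_masked = P @ X, P @ X_masked              # encode both views

recon = np.mean((Z_masked[masked_nodes] - X[masked_nodes]) ** 2)   # reconstruct masked features
align = np.mean(np.sum((Z_orig - Z_masked) ** 2, axis=1))           # align the two views
print(f"reconstruction term {recon:.3f}, alignment term {align:.3f}")
```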
45. scZAG: Integrating ZINB-Based Autoencoder with Adaptive Data Augmentation Graph Contrastive Learning for scRNA-seq Clustering.
- Author
-
Zhang, Tianjiao, Ren, Jixiang, Li, Liangyu, Wu, Zhenao, Zhang, Ziheng, Dong, Guanghui, and Wang, Guohua
- Subjects
- DATA augmentation, DEEP learning, REPRESENTATIONS of graphs, RNA sequencing, GENE expression, DATA analysis
- Abstract
Single-cell RNA sequencing (scRNA-seq) is widely used to interpret cellular states, detect cell subpopulations, and study disease mechanisms. In scRNA-seq data analysis, cell clustering is a key step that can identify cell types. However, scRNA-seq data are characterized by high dimensionality and significant sparsity, presenting considerable challenges for clustering. In the high-dimensional gene expression space, cells may form complex topological structures. Many conventional scRNA-seq data analysis methods focus on identifying cell subgroups rather than exploring these potential high-dimensional structures in detail. Although some methods have begun to consider the topological structures within the data, many still overlook the continuity and complex topology present in single-cell data. We propose a deep learning framework that begins by employing a zero-inflated negative binomial (ZINB) model to denoise the highly sparse and over-dispersed scRNA-seq data. Next, scZAG uses an adaptive graph contrastive representation learning approach that combines approximate personalized propagation of neural predictions graph convolution (APPNPGCN) with graph contrastive learning methods. By using APPNPGCN as the encoder for graph contrastive learning, we ensure that each cell's representation reflects not only its own features but also its position in the graph and its relationships with other cells. Graph contrastive learning exploits the relationships between nodes to capture the similarity among cells, better representing the data's underlying continuity and complex topology. Finally, the learned low-dimensional latent representations are clustered using Kullback–Leibler divergence. We validated the superior clustering performance of scZAG on 10 common scRNA-seq datasets in comparison to existing state-of-the-art clustering methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. TP-GCL: graph contrastive learning from the tensor perspective.
- Author
-
Mingyuan Li, Lei Meng, Zhonglin Ye, Yanglin Yang, Shujuan Cao, Yuzhi Xiao, and Haixing Zhao
- Subjects
- GRAPH neural networks, GRAPH labelings
- Abstract
Graph Neural Networks (GNNs) have demonstrated significant potential as powerful tools for handling graph data in various fields. However, traditional GNNs often encounter limitations in information capture and generalization when dealing with complex and high-order graph structures. Concurrently, the sparse labeling phenomenon in graph data poses challenges in practical applications. To address these issues, we propose a novel graph contrastive learning method, TP-GCL, based on a tensor perspective. The objective is to overcome the limitations of traditional GNNs in modeling complex structures and addressing the issue of sparse labels. Firstly, we transform ordinary graphs into hypergraphs through clique expansion and employ high-order adjacency tensors to represent hypergraphs, aiming to comprehensively capture their complex structural information. Secondly, we introduce a contrastive learning framework, using the original graph as the anchor, to further explore the differences and similarities between the anchor graph and the tensorized hypergraph. This process effectively extracts crucial structural features from graph data. Experimental results demonstrate that TP-GCL achieves significant performance improvements compared to baseline methods across multiple public datasets, particularly showcasing enhanced generalization capabilities and effectiveness in handling complex graph structures and sparse labeled data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
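As a rough reading of the TP-GCL entry above (not the paper's code), the snippet below treats each maximal clique of an ordinary graph as a hyperedge and encodes the resulting hypergraph as a third-order adjacency tensor; the tensor order and the use of maximal cliques are assumptions made for illustration.

```python
import itertools

import networkx as nx
import numpy as np


def cliques_to_hyperedges(g: nx.Graph):
    """Clique expansion in the graph-to-hypergraph direction: each maximal clique becomes a hyperedge."""
    return [frozenset(c) for c in nx.find_cliques(g)]


def third_order_adjacency(n_nodes: int, hyperedges):
    """T[i, j, k] = 1 iff nodes i, j, and k co-occur in at least one hyperedge."""
    t = np.zeros((n_nodes, n_nodes, n_nodes), dtype=np.float32)
    for e in hyperedges:
        if len(e) < 3:
            continue
        for i, j, k in itertools.permutations(sorted(e), 3):
            t[i, j, k] = 1.0
    return t


g = nx.karate_club_graph()
hyperedges = cliques_to_hyperedges(g)
tensor = third_order_adjacency(g.number_of_nodes(), hyperedges)
print(f"{len(hyperedges)} hyperedges, adjacency tensor shape {tensor.shape}")
```

The anchor-versus-tensorized-view contrast itself could then reuse a standard objective such as the NT-Xent function sketched under the scZAG entry above.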
47. Debiased graph contrastive learning based on positive and unlabeled learning.
- Author
-
Li, Zhiqiang, Wang, Jie, and Liang, Jiye
- Abstract
Graph contrastive learning (GCL) is one of the mainstream techniques for unsupervised graph representation learning: it reduces the distance between positive pairs and increases the distance between negative pairs in the embedding space to obtain discriminative representations. However, most existing GCL methods focus on graph augmentation for positive samples while the role of negative samples is less explored, and they treat all samples except the anchor as negatives, which may push away latent positives that belong to the same class as the anchor and thus introduces false negatives. To this end, this paper proposes a novel framework called Debiased Graph Contrastive Learning Based on Positive and Unlabeled Learning (DGCL-PU). First, we cluster the nodes with the K-means algorithm, treat samples in the same cluster as the anchor as positives, and regard the remaining samples as unlabeled. PU learning can then assign each sample a score that reflects its propensity to be a true negative relative to the anchor. Second, we combine the similarity between samples with this negative-propensity score to obtain reasonable weights for negative samples. Finally, a weighted graph contrastive loss is designed to obtain more discriminative feature representations and alleviate the bias caused by false negatives. Moreover, DGCL-PU is a general framework that can be embedded into most existing GCL methods to improve their performance. Experiments on multiple benchmark datasets demonstrate that our method achieves state-of-the-art performance on several downstream tasks, including node classification, node clustering, and link prediction. [ABSTRACT FROM AUTHOR] An illustrative sketch of the weighted contrastive objective follows this entry.
- Published
- 2024
- Full Text
- View/download PDF
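The DGCL-PU entry above weights negatives by a PU-derived propensity score after K-means pseudo-labelling. Below is a simplified sketch of that idea, assuming PyTorch and scikit-learn; the propensity scores are passed in as a precomputed tensor, and all names, the cluster count, and the temperature are illustrative rather than taken from the authors' release.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def kmeans_pseudo_labels(z: torch.Tensor, n_clusters: int = 7) -> torch.Tensor:
    """Cluster node embeddings with K-means; same-cluster nodes act as extra positives for each anchor."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z.detach().cpu().numpy())
    return torch.as_tensor(labels, device=z.device)


def weighted_contrastive_loss(z1, z2, labels, neg_weight, tau=0.5):
    """InfoNCE-style loss where each cross-view negative is scaled by its negative-propensity weight."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / tau)                     # (N, N) cross-view similarities
    same_cluster = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = (sim * same_cluster).sum(dim=1)                  # anchor plus same-cluster samples in the other view
    neg = (sim * (~same_cluster) * neg_weight.unsqueeze(0)).sum(dim=1)
    return -torch.log(pos / (pos + neg)).mean()
```

In the full method the per-sample weights would come from a PU classifier trained on the clustered positives versus the unlabeled rest; here `neg_weight` is simply an input to keep the sketch short.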
48. A Lightweight Method for Graph Neural Networks Based on Knowledge Distillation and Graph Contrastive Learning.
- Author
-
Wang, Yong and Yang, Shuqun
- Subjects
GRAPH neural networks, KNOWLEDGE graphs, REPRESENTATIONS of graphs, KNOWLEDGE transfer, GRAPH algorithms
- Abstract
Graph neural networks (GNNs) are crucial tools for processing non-Euclidean data. However, scalability issues caused by the dependency and topology of graph data make GNNs difficult to deploy in practical applications. Some methods address this by transferring GNN knowledge to MLPs through knowledge distillation, but the distilled MLPs cannot directly capture graph structure and rely only on node features, resulting in poor performance and sensitivity to noise. To solve this problem, we propose a lightweight optimization method for GNNs that combines graph contrastive learning with variable-temperature knowledge distillation. First, we use graph contrastive learning to capture graph structural representations, enriching the input information available to the MLP. Then, we transfer GNN knowledge to the MLP using variable-temperature knowledge distillation. Additionally, we enhance both node content and structural features before feeding them to the MLP, improving its performance and stability. Extensive experiments on seven datasets show that the proposed KDGCL model outperforms baseline models in both transductive and inductive settings; in particular, it achieves an average improvement of 1.63% in transductive settings and 0.8% in inductive settings. Furthermore, KDGCL retains the MLP's parameter efficiency and inference speed while remaining competitive in accuracy. [ABSTRACT FROM AUTHOR] An illustrative sketch of the variable-temperature distillation step follows this entry.
- Published
- 2024
- Full Text
- View/download PDF
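The following is a compact sketch of GNN-to-MLP distillation with a temperature that changes over training, as one reading of "variable-temperature knowledge distillation" in the entry above; the linear annealing schedule, loss weighting, and function names are illustrative assumptions, not the KDGCL implementation.

```python
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, temperature: float) -> torch.Tensor:
    """Soften teacher and student distributions with the same temperature and match them with KL."""
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2


def scheduled_temperature(epoch: int, max_epoch: int, t_start=4.0, t_end=1.0) -> float:
    """Linearly anneal the temperature from t_start to t_end over training (assumed schedule)."""
    frac = min(epoch / max(max_epoch - 1, 1), 1.0)
    return t_start + (t_end - t_start) * frac


def train_step(mlp, x_with_structure, y, teacher_logits, epoch, max_epoch, alpha=0.5):
    """Combine hard-label cross-entropy with the softened distillation term."""
    # x_with_structure: node features concatenated with contrastively learned structural embeddings.
    logits = mlp(x_with_structure)
    t = scheduled_temperature(epoch, max_epoch)
    return alpha * F.cross_entropy(logits, y) + (1 - alpha) * kd_loss(logits, teacher_logits, t)
```

The teacher logits come from the pretrained GNN and are fixed during distillation; only the MLP student is updated.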
49. Video Summarization Generation Network Based on Dynamic Graph Contrastive Learning and Feature Fusion.
- Author
-
Zhang, Jing, Wu, Guangli, Bi, Xinlong, and Cui, Yulong
- Subjects
VIDEO summarization, GRAPH neural networks, FEATURE extraction, GRAPH algorithms, FRUIT extracts
- Abstract
Video summarization aims to analyze the structure and content of videos and extract key segments to construct a summary that accurately conveys the main content, allowing users to grasp the core information without browsing the full video. However, existing methods have difficulty capturing long-term dependencies in long videos. In addition, graph structures often contain substantial noise, which introduces redundant information and hinders effective learning of video features. To solve these problems, we propose a video summarization generation network based on dynamic graph contrastive learning and feature fusion, consisting of three modules: feature extraction, a video encoder, and feature fusion. First, we compute shot features and construct a dynamic graph whose nodes are the shot features and whose edge weights are the similarities between shots. In the video encoder, we extract the temporal and structural features of the video using stacked L-G Blocks, where each L-G Block consists of a bidirectional long short-term memory network and a graph convolutional network; their output forms the shallow-level features. To remove redundant information in the graph, graph contrastive learning is then used to obtain optimized deep-level features. Finally, to fully exploit the video's feature information, a feature fusion gate based on a gating mechanism fuses the shallow-level and deep-level features. Extensive experiments on two benchmark datasets, TVSum and SumMe, show that our proposed method outperforms most current state-of-the-art video summarization methods. [ABSTRACT FROM AUTHOR] An illustrative sketch of the L-G Block and fusion gate follows this entry.
- Published
- 2024
- Full Text
- View/download PDF
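Below is a minimal sketch of an "L-G Block" that stacks a bidirectional LSTM with a graph convolution over shot features, followed by a gated fusion of shallow and deep features, as described in the entry above. Shapes, layer sizes, and class names are assumptions for illustration, not the authors' implementation; PyTorch and PyTorch Geometric are assumed.

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv


class LGBlock(nn.Module):
    """Bidirectional LSTM for temporal order plus a GCN over the dynamic shot-similarity graph."""

    def __init__(self, dim):
        super().__init__()
        # dim is assumed even so that the bidirectional LSTM output matches the input width.
        self.lstm = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)
        self.gcn = GCNConv(dim, dim)

    def forward(self, shots, edge_index):
        # shots: (num_shots, dim); treat the shot sequence as one batch element for the LSTM.
        temporal, _ = self.lstm(shots.unsqueeze(0))
        return torch.relu(self.gcn(temporal.squeeze(0), edge_index))


class FusionGate(nn.Module):
    """sigmoid(W[shallow; deep]) decides, per feature, how much of each level to keep."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, shallow, deep):
        g = torch.sigmoid(self.gate(torch.cat([shallow, deep], dim=-1)))
        return g * shallow + (1 - g) * deep
```

In this reading, stacked `LGBlock`s produce the shallow-level features, a contrastive objective over graph views yields the deep-level features, and `FusionGate` merges the two before the final frame or shot scoring head.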
50. Self-supervised Learning with Adaptive Graph Structure and Function Representation for Cross-Dataset Brain Disorder Diagnosis
- Author
-
Chen, Dongdong, Yao, Linlin, Liu, Mengjun, Shen, Zhenrong, Hu, Yuqi, Song, Zhiyun, Wang, Qian, Zhang, Lichi, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Linguraru, Marius George, editor, Dou, Qi, editor, Feragen, Aasa, editor, Giannarou, Stamatia, editor, Glocker, Ben, editor, Lekadir, Karim, editor, and Schnabel, Julia A., editor
- Published
- 2024
- Full Text
- View/download PDF