AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation
- Source :
- KDD 2024
- Publication Year :
- 2024
Abstract
- Graph Neural Networks (GNNs) have revolutionized graph-based machine learning, but their heavy computational demands pose challenges for latency-sensitive edge devices in practical industrial applications. In response, a new wave of methods, collectively known as GNN-to-MLP (G2M) Knowledge Distillation, has emerged. These methods aim to transfer GNN-learned knowledge to a more efficient MLP student, which offers faster, resource-efficient inference while maintaining competitive performance compared to GNNs. However, they face significant challenges when training data is insufficient and test data is incomplete, limiting their applicability in real-world settings. To address these challenges, we propose AdaGMLP, an AdaBoosting GNN-to-MLP Knowledge Distillation framework. It leverages an ensemble of diverse MLP students trained on different subsets of labeled nodes, addressing the issue of insufficient training data. Additionally, it incorporates a Node Alignment technique for robust predictions on test data with missing or incomplete features. Our experiments on seven benchmark datasets with different settings demonstrate that AdaGMLP outperforms existing G2M methods, making it suitable for a wide range of latency-sensitive real-world applications. Our code is available at https://github.com/WeigangLu/AdaGMLP-KDD24.
- Comment: Accepted by KDD 2024
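- For context, the sketch below illustrates the generic GNN-to-MLP distillation objective that this line of work builds on: an MLP student is trained on node features alone, supervised both by ground-truth labels and by a pre-trained GNN teacher's soft predictions, so that no graph is needed at inference time. This is a minimal illustration under stated assumptions, not the AdaGMLP framework itself (which additionally ensembles multiple MLP students AdaBoost-style over different labeled subsets and applies Node Alignment); the names, hyperparameters, and toy data (`MLPStudent`, `g2m_distillation_loss`, `temperature`, `alpha`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPStudent(nn.Module):
    """Simple MLP student that predicts from node features alone (no graph at inference)."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def g2m_distillation_loss(student_logits, teacher_logits, labels, labeled_mask,
                          temperature=2.0, alpha=0.5):
    """Generic G2M objective: cross-entropy on labeled nodes plus KL divergence
    to the GNN teacher's temperature-softened predictions on all nodes.
    (Illustrative hyperparameters, not taken from the paper.)"""
    ce = F.cross_entropy(student_logits[labeled_mask], labels[labeled_mask])
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1 - alpha) * kd

# Toy usage: 100 nodes, 16-dim features, 3 classes; in practice teacher_logits
# would come from a pre-trained GNN teacher evaluated on the full graph.
x = torch.randn(100, 16)
teacher_logits = torch.randn(100, 3)
labels = torch.randint(0, 3, (100,))
labeled_mask = torch.zeros(100, dtype=torch.bool)
labeled_mask[:20] = True

student = MLPStudent(16, 32, 3)
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    loss = g2m_distillation_loss(student(x), teacher_logits, labels, labeled_mask)
    loss.backward()
    opt.step()
```

- Because only the MLP is used at inference time, this family of methods offers the faster, resource-efficient deployment on latency-sensitive devices that the abstract highlights.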
- Subjects :
- Computer Science - Machine Learning
- Computer Science - Artificial Intelligence
Details
- Database :
- arXiv
- Journal :
- KDD 2024
- Publication Type :
- Report
- Accession number :
- edsarx.2405.14307
- Document Type :
- Working Paper