
When to Pre-Train Graph Neural Networks? From Data Generation Perspective!

Authors :
Cao Y; Xu J; Yang C; Wang J; Zhang Y; Wang C; Chen L; Yang Y
Source :
KDD : proceedings. International Conference on Knowledge Discovery & Data Mining [KDD] 2023 Aug; Vol. 2023, pp. 142-153. Date of Electronic Publication: 2023 Aug 04.
Publication Year :
2023

Abstract

In recent years, graph pre-training has gained significant attention, focusing on acquiring transferable knowledge from unlabeled graph data to improve downstream performance. Despite these efforts, negative transfer remains a major concern when applying graph pre-trained models to downstream tasks. Previous studies have made great efforts on the questions of what to pre-train and how to pre-train by designing a variety of graph pre-training and fine-tuning strategies. However, there are cases where even the most advanced "pre-train and fine-tune" paradigms fail to yield distinct benefits. This paper introduces a generic framework, W2PGNN, to answer the crucial question of when to pre-train (i.e., in what situations graph pre-training can be expected to help) before performing costly pre-training or fine-tuning. We start from a new perspective that explores the complex generative mechanisms linking the pre-training data to the downstream data. In particular, W2PGNN first fits the pre-training data to a set of graphon bases, where each element of the basis (i.e., a graphon) identifies a fundamental transferable pattern shared by a collection of pre-training graphs. All convex combinations of the graphon bases give rise to a generator space, and the graphs generated from this space form the solution space of downstream data that can benefit from pre-training. In this manner, the feasibility of pre-training can be quantified as the probability of generating the downstream data from a generator in the generator space. W2PGNN offers three broad applications: delineating the application scope of graph pre-trained models, quantifying the feasibility of pre-training, and assisting in the selection of pre-training data to enhance downstream performance. We provide a theoretically sound solution for the first application and extensive empirical justifications for the latter two.
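As a rough illustration of the construction the abstract describes, the Python sketch below fits a step-function graphon to each collection of pre-training graphs, forms a convex combination of the resulting graphon bases (one point in the generator space), and samples a graph from that generator. This is a hypothetical sketch, not the authors' W2PGNN implementation: the block-averaging graphon estimator, the degree-based node alignment, and the synthetic Erdos-Renyi input collections are all simplifying assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def estimate_graphon(graphs, k=8):
    """Average the degree-sorted adjacency matrices of a graph collection
    into a k x k block matrix of edge densities (a step-function graphon)."""
    W = np.zeros((k, k))
    for A in graphs:
        n = A.shape[0]
        order = np.argsort(-A.sum(axis=1))      # align graphs by degree order
        A = A[np.ix_(order, order)]
        cuts = np.linspace(0, n, k + 1).astype(int)
        for i in range(k):
            for j in range(k):
                block = A[cuts[i]:cuts[i + 1], cuts[j]:cuts[j + 1]]
                W[i, j] += block.mean() if block.size else 0.0
    return W / len(graphs)

def sample_graph(W, n):
    """Sample an n-node graph from graphon W: draw discretized latent
    positions uniformly, connect i and j with probability W(u_i, u_j)."""
    k = W.shape[0]
    u = rng.integers(0, k, size=n)              # latent positions on the grid
    P = W[np.ix_(u, u)]                         # pairwise edge probabilities
    A = np.triu(rng.random((n, n)) < P, 1).astype(int)
    return A + A.T                              # symmetric, no self-loops

# Stand-in pre-training collections: dense vs. sparse Erdos-Renyi graphs.
def er(n, p):
    A = np.triu(rng.random((n, n)) < p, 1).astype(int)
    return A + A.T

collections = [[er(60, 0.3) for _ in range(20)],
               [er(60, 0.05) for _ in range(20)]]

bases = [estimate_graphon(gs) for gs in collections]   # graphon bases
alpha = np.array([0.7, 0.3])                           # convex weights, sum to 1
generator = sum(a * W for a, W in zip(alpha, bases))   # a point in generator space
G = sample_graph(generator, n=100)                     # a "downstream-like" graph

In this toy setting, quantifying feasibility would amount to asking how likely a given downstream graph is to arise from some convex combination of the bases; the paper develops that measure rigorously, whereas the sketch only exhibits the generator-space construction itself.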

Details

Language :
English
ISSN :
2154-817X
Volume :
2023
Database :
MEDLINE
Journal :
KDD : proceedings. International Conference on Knowledge Discovery & Data Mining
Publication Type :
Academic Journal
Accession number :
38333106
Full Text :
https://doi.org/10.1145/3580305.3599548