
Are Powerful Graph Neural Nets Necessary? A Dissection on Graph Classification

Authors :
Chen, Ting
Bian, Song
Sun, Yizhou
Publication Year :
2019

Abstract

Graph Neural Nets (GNNs) have received increasing attention, partially due to their superior performance in many node and graph classification tasks. However, there is a lack of understanding of what they are learning and how sophisticated the learned graph functions are. In this work, we propose a dissection of GNNs for graph classification into two parts: 1) the graph filtering, where graph-based neighbor aggregations are performed, and 2) the set function, where a set of hidden node features is composed for prediction. To study the importance of both parts, we propose to linearize them separately. We first linearize the graph filtering function, resulting in the Graph Feature Network (GFN), a simple, lightweight neural net defined on a set of graph-augmented features. Further linearizing GFN's set function results in the Graph Linear Network (GLN), which is a linear function. Empirically, we perform evaluations on common graph classification benchmarks. To our surprise, we find that, despite the simplification, GFN can match or exceed the best accuracies produced by recently proposed GNNs (at a fraction of the computation cost), while GLN underperforms significantly. Our results demonstrate the importance of the non-linear set function, and suggest that linear graph filtering with a non-linear set function is an efficient and powerful scheme for modeling existing graph classification benchmarks.

Comment :
A shorter version titled "Graph Feature Networks" was accepted to the ICLR'19 RLGM workshop. Code is available at https://github.com/chentingpc/gfn.
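The sketch below illustrates the two-part dissection described in the abstract: a linear, parameter-free graph filtering step that pre-computes graph-augmented node features, followed by a non-linear set function (per-node MLP, pooling, classifier). It is a minimal reading of the idea, assuming a symmetrically normalized adjacency, sum pooling, and illustrative layer sizes and a propagation depth K; these choices and all names are assumptions, not the authors' exact configuration (see the linked repository for the reference implementation).

```python
# Minimal GFN-style sketch (assumed details: normalized adjacency, K hops,
# sum pooling, layer sizes). Not the authors' exact model.
import torch
import torch.nn as nn


def graph_augmented_features(adj: torch.Tensor, x: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Linear graph filtering: concatenate node degrees with multi-scale
    propagated features [X, A_hat X, ..., A_hat^k X] (no learned weights)."""
    n = adj.size(0)
    a_hat = adj + torch.eye(n)                      # add self-loops
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    feats, h = [deg.unsqueeze(1), x], x
    for _ in range(k):
        h = a_norm @ h                              # one more propagation step
        feats.append(h)
    return torch.cat(feats, dim=1)                  # shape [n, 1 + (k + 1) * d]


class GFN(nn.Module):
    """Non-linear set function over the pre-computed graph-augmented features:
    per-node MLP, sum pooling over the node set, then a classification head."""

    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.readout = nn.Linear(hidden, num_classes)

    def forward(self, aug_feats: torch.Tensor) -> torch.Tensor:
        h = self.node_mlp(aug_feats)                # transform each node independently
        return self.readout(h.sum(dim=0))           # pool the node set, then classify


# Toy usage: a random undirected 5-node graph with 4-dimensional node features.
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
x = torch.randn(5, 4)
feats = graph_augmented_features(adj, x, k=3)       # [5, 17]
logits = GFN(in_dim=feats.size(1), hidden=64, num_classes=2)(feats)
```

Under this reading, the GLN variant would replace the per-node MLP and pooled classifier with a single linear map, i.e. linearize the set function as well, which is the ablation the abstract reports as underperforming.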

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1905.04579
Document Type :
Working Paper