Inferring local topology via variational convolution for graph representation
- Author
- Jingyi HOU, Yuxin TANG, Xinbo YU, and Zhijie LIU
- Subjects
graph attention network, local topology, variational inference, convolutional neural network, hybrid neural network, Mining engineering. Metallurgy, TN1-997, Environmental engineering, TA170-171
- Abstract
The development of deep learning techniques and the support of big-data computing power have revolutionized graph representation research by enabling the learning of diverse graph neural network structures. Existing methods, such as graph attention networks, mainly focus on global information propagation in graph neural networks, whose strong representation capability has been theoretically proven. However, these general methods lack flexible representation mechanisms when facing graph data whose local topology carries specific semantics, such as functional groups in chemical reactions. Accordingly, it is of great importance to further exploit local structure representations for graph-based tasks. Several existing methods either rely on domain expert knowledge or conduct subgraph isomorphism counting to learn local topology representations of graphs. However, there is no guarantee that these methods generalize easily to other domains without specific knowledge or complex substructure preprocessing. In this study, we propose a simple and automatic local topology inference method that uses variational convolutions to improve the local representation ability of graph attention networks. The proposed method not only performs relationship reasoning and message passing on the global graph structure but also adaptively learns the graph's local structure representations under the guidance of readily accessible statistical priors. Specifically, variational inference is used to adaptively learn the convolutional template size; the inference is conducted layer by layer under the guidance of the statistical priors so that the template size adapts, in a self-supervised way, to multiple subgraphs with different structures. The variational convolution module is easily pluggable and can be concatenated with arbitrary hidden layers of any graph neural network.
Moreover, owing to the locality of the convolution operations, the relations between graph nodes can be further sparsified to alleviate the over-squashing problem in the global information propagation of the graph neural network. As a result, the proposed method can significantly improve the overall representation ability of the graph attention network by using variational inference over the convolutional operations for local topology representation. Experiments are conducted on three large-scale and publicly available datasets, i.e., the OGBG-MolHIV, USPTO, and Buchwald-Hartwig datasets. Experimental results show that exploiting various kinds of local topological information helps improve the performance of the graph attention network.
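The abstract does not spell out how the convolutional template size is inferred. The following is a minimal, hypothetical sketch (all function names, shapes, and the use of a Gumbel-softmax relaxation are assumptions, not the authors' implementation) of the general idea: keep a distribution over candidate template sizes, draw a relaxed sample from it, and apply the local convolutions as a differentiable weighted mixture over node features.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Relaxed categorical sample: differentiable weights over candidates."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

def local_conv(x, k):
    """'Same'-padded 1-D mean convolution with template size k over node features."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[i:i + k].mean(axis=0) for i in range(len(x))])

def variational_local_conv(x, logits, sizes=(1, 3, 5), tau=0.5):
    """Mixture of local convolutions weighted by the inferred size distribution.

    In the paper's setting, `logits` would be learned layer by layer under
    statistical priors; here they are just a free parameter.
    """
    w = gumbel_softmax(np.asarray(logits, dtype=float), tau)
    out = sum(wi * local_conv(x, k) for wi, k in zip(w, sizes))
    return out, w

x = rng.normal(size=(6, 4))                 # 6 nodes, 4-dim hidden features
out, w = variational_local_conv(x, [0.0, 0.0, 0.0])
print(out.shape)                            # (6, 4): same shape, so pluggable
```

Because the output has the same shape as the input, such a module could sit between the hidden layers of a graph attention network, as the abstract describes.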
- Published
- 2023