
Graph Masked Autoencoders with Transformers

Authors:
Zhang, Sixiao
Chen, Hongxu
Yang, Haoran
Sun, Xiangguo
Yu, Philip S.
Xu, Guandong
Publication Year:
2022
Publisher:
arXiv, 2022.

Abstract

Recently, transformers have shown promising performance in learning graph representations. However, applying transformers to real-world scenarios remains challenging because deep transformers are hard to train from scratch and their memory consumption grows quadratically with the number of nodes. In this paper, we propose Graph Masked Autoencoders (GMAEs), a self-supervised transformer-based model for learning graph representations. To address these two challenges, we adopt a masking mechanism and an asymmetric encoder-decoder design. Specifically, GMAE takes partially masked graphs as input and reconstructs the features of the masked nodes. The encoder and decoder are asymmetric: the encoder is a deep transformer, while the decoder is a shallow transformer. The masking mechanism and the asymmetric design make GMAE more memory-efficient than conventional transformers. We show that, when used as a conventional self-supervised graph representation model, GMAE achieves state-of-the-art performance on both graph classification and node classification under common downstream evaluation protocols. We also show that, compared with end-to-end training from scratch, pre-training with GMAE and then fine-tuning achieves comparable performance while simplifying the training process.
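To make the asymmetric design described in the abstract concrete, below is a minimal PyTorch-style sketch of a masked autoencoder over node features with a deep encoder and a shallow decoder. The class name, layer counts, masking ratio, and the way graph structure is injected (assumed here to come from precomputed structural encodings added to the node features) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GraphMaskedAutoencoder(nn.Module):
    """Hypothetical sketch: asymmetric masked autoencoder on node features.

    Assumption: graph structure enters only via precomputed structural
    encodings already added to the input features; the paper's exact
    handling of graph structure may differ.
    """

    def __init__(self, feat_dim, hidden_dim=128, enc_layers=8, dec_layers=2,
                 nhead=4, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.in_proj = nn.Linear(feat_dim, hidden_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, hidden_dim))

        # Deep encoder: attends only over the visible (unmasked) nodes.
        enc_layer = nn.TransformerEncoderLayer(hidden_dim, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=enc_layers)

        # Shallow decoder: attends over encoded visible nodes plus mask tokens.
        dec_layer = nn.TransformerEncoderLayer(hidden_dim, nhead, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=dec_layers)

        self.out_proj = nn.Linear(hidden_dim, feat_dim)

    def forward(self, x):
        # x: (batch, num_nodes, feat_dim) node features.
        B, N, _ = x.shape
        num_visible = max(1, int(N * (1 - self.mask_ratio)))

        # Randomly split nodes into visible and masked sets per graph.
        perm = torch.rand(B, N, device=x.device).argsort(dim=1)
        visible_idx, masked_idx = perm[:, :num_visible], perm[:, num_visible:]

        h = self.in_proj(x)
        h_visible = torch.gather(
            h, 1, visible_idx.unsqueeze(-1).expand(-1, -1, h.size(-1)))

        # The deep encoder only sees visible nodes, which is the source
        # of the memory savings relative to a full-graph transformer.
        z_visible = self.encoder(h_visible)

        # Decoder input: encoded visible nodes + learnable mask tokens.
        mask_tokens = self.mask_token.expand(B, masked_idx.size(1), -1)
        dec_out = self.decoder(torch.cat([z_visible, mask_tokens], dim=1))

        # Reconstruct features of the masked nodes only.
        pred_masked = self.out_proj(dec_out[:, num_visible:])
        target_masked = torch.gather(
            x, 1, masked_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        return nn.functional.mse_loss(pred_masked, target_masked)


# Toy usage: a batch of 4 graphs, each with 32 nodes and 16-dim features.
model = GraphMaskedAutoencoder(feat_dim=16)
loss = model(torch.randn(4, 32, 16))
loss.backward()
```

Because the deep encoder processes only the visible fraction of nodes, the quadratic attention cost applies to a smaller sequence, while the shallow decoder handles the full node set only briefly; this mirrors the memory-efficiency argument in the abstract.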

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....6d7659f7adbd9fb2d8ec79366cdad27d
Full Text:
https://doi.org/10.48550/arxiv.2202.08391