
Compositionality-Aware Graph2Seq Learning

Authors:
Itoh, Takeshi D.
Kubo, Takatomi
Ikeda, Kazushi
Publication Year:
2022

Abstract

Graphs are a highly expressive data structure, but it is often difficult for humans to find patterns in a complex graph. Hence, generating human-interpretable sequences from graphs, called graph2seq learning, has gained interest. In many graph2seq tasks, the compositionality in a graph is expected to correspond to the compositionality in the output sequence; therefore, applying a compositionality-aware GNN architecture should improve model performance. In this study, we adopt the multi-level attention pooling (MLAP) architecture, which can aggregate graph representations from multiple levels of information localities. As a real-world example, we take up the extreme source code summarization task, in which a model estimates the name of a program function from its source code. We demonstrate that a model with the MLAP architecture outperforms the previous state-of-the-art model while using more than seven times fewer parameters.

8 pages, 1 figure, 2 tables
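
The multi-level readout described in the abstract can be sketched roughly as follows: an attention-based pooling is applied to the node embeddings produced at each GNN layer, and the resulting per-layer graph vectors are aggregated into a single graph representation. This is a minimal illustration only; the class names (AttentionPool, MLAPReadout), the sum aggregation across layers, and the single-graph (unbatched) setting are assumptions for clarity, not the paper's actual implementation.

    import torch
    import torch.nn as nn

    class AttentionPool(nn.Module):
        """Attention readout: pools node embeddings into one graph-level vector."""
        def __init__(self, dim):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, h):                            # h: [num_nodes, dim]
            a = torch.softmax(self.score(h), dim=0)      # attention weight per node
            return (a * h).sum(dim=0)                    # [dim] graph vector

    class MLAPReadout(nn.Module):
        """Multi-level attention pooling (sketch): one attention readout per
        GNN layer, aggregated here by summation over layers."""
        def __init__(self, dim, num_layers):
            super().__init__()
            self.pools = nn.ModuleList([AttentionPool(dim) for _ in range(num_layers)])

        def forward(self, layer_outputs):                # list of [num_nodes, dim]
            per_layer = [pool(h) for pool, h in zip(self.pools, layer_outputs)]
            return torch.stack(per_layer).sum(dim=0)     # [dim] final representation

    # Toy usage: 3 GNN layers, 5 nodes, 16-dimensional embeddings.
    layer_outputs = [torch.randn(5, 16) for _ in range(3)]
    readout = MLAPReadout(dim=16, num_layers=3)
    graph_vec = readout(layer_outputs)                   # fed to a sequence decoder

Pooling each layer separately is what lets the model combine local (early-layer) and global (late-layer) views of the graph before decoding the output sequence.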

Details

Language:
English
Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....bc9d446fce4d92ba4b8f10215e0ee16a