1. Simplicial Attention Neural Networks
- Author
Giusti, L., Battiloro, C., Di Lorenzo, P., Sardellitti, S., and Barbarossa, S.
- Abstract
The aim of this work is to introduce simplicial attention networks (SANs), i.e., novel neural architectures that operate on data defined on simplicial complexes by leveraging masked self-attentional layers. Hinging on formal arguments from topological signal processing, we introduce a proper self-attention mechanism able to process data components at different layers (e.g., nodes, edges, triangles, and so on), while learning how to weight both the upper and lower neighborhoods of the given topological domain in a totally task-oriented fashion. The proposed SANs generalize most of the current architectures available for processing data defined on simplicial complexes. The proposed approach compares favorably with other methods when applied to different (inductive and transductive) tasks, such as trajectory prediction and missing data imputation in citation complexes.
- Comment
In V2, we changed the title to Simplicial Attention Neural Networks, since we discovered the paper [1], which shares the same title as V1 and was available on OpenReview a few days before our first submission. In V2, we cite [1], clarifying the several differences with our method and adding extensive numerical comparisons. [1] Christopher W. et al., Simplicial attention networks. Available on OpenReview.
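The core idea in the abstract, masked self-attention where a simplex attends separately to its lower neighbors (simplices sharing a face) and upper neighbors (simplices sharing a coface), can be illustrated with a minimal numpy sketch. All function names, the GAT-style additive score function, and the toy incidence matrices below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_softmax(e, mask):
    # Row-wise softmax restricted to entries where mask is True;
    # rows with no valid entries yield all-zero attention weights.
    e = np.where(mask, e, -1e30)
    w = np.exp(e - e.max(axis=1, keepdims=True))
    w = w * mask
    s = w.sum(axis=1, keepdims=True)
    return np.where(s > 0, w / np.maximum(s, 1e-12), 0.0)

def attention(h, A, a):
    # GAT-style additive attention (an assumption here), masked by
    # the neighborhood adjacency A of the simplicial complex.
    f = h.shape[1]
    e = (h @ a[:f])[:, None] + (h @ a[f:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)          # LeakyReLU on raw scores
    alpha = masked_softmax(e, A)             # (n_k, n_k) attention weights
    return alpha @ h

def san_layer(x, B_low, B_up, W, a_low, a_up):
    # x: (n_k, f) signals on k-simplices; B_low = B_k and B_up = B_{k+1}
    # are incidence matrices. Lower/upper neighborhoods come from the
    # Hodge Laplacian factors B_k^T B_k and B_{k+1} B_{k+1}^T, and each
    # gets its own learned attention, mirroring the abstract's
    # task-oriented weighting of both neighborhoods.
    h = x @ W
    A_low = (B_low.T @ B_low) != 0           # share a common face
    A_up = (B_up @ B_up.T) != 0              # share a common coface
    return np.tanh(attention(h, A_low, a_low) + attention(h, A_up, a_up))

# Toy complex: one filled triangle (3 nodes, 3 edges, 1 triangle).
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)   # node-to-edge incidence
B2 = np.array([[1], [-1], [1]], dtype=float) # edge-to-triangle incidence

x = rng.standard_normal((3, 2))              # a 2-dim signal per edge
W = rng.standard_normal((2, 4))
a_low = rng.standard_normal(8)
a_up = rng.standard_normal(8)
out = san_layer(x, B1, B2, W, a_low, a_up)   # (3, 4) updated edge features
```

The same layer applies at any order k by swapping in the appropriate incidence matrices; stacking such layers gives a sketch of the overall architecture the abstract describes.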
- Published
- 2022