
Learning invariant representations in neural networks

Authors :
Fuchs, FB
Kumar, P
Bronstein, M
Posner, H
Publication Year :
2022

Abstract

Symmetries and invariances are ubiquitous in machine learning tasks. While convolutional neural networks famously and successfully exploit translational symmetries, other symmetries have, until recently, often been neglected. Incorporating symmetries or invariances into the neural network architecture avoids costly data augmentation and alleviates the need for large datasets. The presented work focuses on invariant and equivariant neural network layers, putting symmetries at the centre of neural network architecture design. Concretely, this thesis covers three different invariances: permutation invariance, roto-translation invariance, and label invariance.

• For permutation invariance, this work presents an analysis of the capacity and constraints of sum aggregation, one of the most widely used permutation-invariant neural network paradigms (see the sketch following this abstract). Next, different permutation-invariant relational reasoning modules are proposed and compared in the context of multi-object tracking.

• For roto-translation invariance, a new roto-translation equivariant graph network based on self-attention is introduced. In a subsequent step, this is adapted to enable iterative position refinement, a crucial feature for protein structure prediction.

• Lastly, we cover the case where there is no known symmetry group, but rather a nuisance factor to which the practitioner would like the neural network to be invariant. We refer to this concept as label invariance. To that end, a neural network module is introduced that can promote or suppress the nuisance factor while simultaneously making the network more interpretable.
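
As a minimal, hypothetical illustration of the sum-aggregation paradigm mentioned in the first bullet (a Deep-Sets-style sketch in NumPy, not code from the thesis; all names and weights are invented for demonstration), the snippet below applies a shared per-element feature map, sums the features over the set, and applies a read-out. Because summation is order-independent, permuting the input set leaves the output unchanged.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights; phi: R^3 -> R^4, rho: R^4 -> R^2
W_phi = rng.normal(size=(3, 4))
W_rho = rng.normal(size=(4, 2))

def phi(x):
    # Per-element feature map, shared across all set elements
    return np.tanh(x @ W_phi)

def rho(z):
    # Read-out applied to the aggregated (summed) representation
    return z @ W_rho

def deep_set(points):
    # Summing over the set axis makes the output permutation-invariant
    return rho(phi(points).sum(axis=0))

points = rng.normal(size=(5, 3))        # a set of 5 elements
shuffled = points[rng.permutation(5)]   # same set, different order

print(np.allclose(deep_set(points), deep_set(shuffled)))  # True

The check at the end prints True regardless of how the rows of the input are ordered, which is the defining property the thesis analyses in terms of capacity and constraints.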

Subjects :
Machine learning

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.od......1064..90d35f40f00a77b60efac331922837fe