
HiP: Hierarchical Perceiver

Authors:
Carreira, Joao
Koppula, Skanda
Zoran, Daniel
Recasens, Adria
Ionescu, Catalin
Henaff, Olivier
Shelhamer, Evan
Arandjelovic, Relja
Botvinick, Matt
Vinyals, Oriol
Simonyan, Karen
Zisserman, Andrew
Jaegle, Andrew
Publication Year: 2022

Abstract

General perception systems such as Perceivers can process arbitrary modalities in any combination and are able to handle up to a few hundred thousand inputs. They achieve this generality by using exclusively global attention operations. This, however, hinders them from scaling up to the input sizes required to process raw high-resolution images or video. In this paper, we show that some degree of locality can be introduced back into these models, greatly improving their efficiency while preserving their generality. To scale them further, we introduce a self-supervised approach that enables learning dense low-dimensional positional embeddings for very large signals. We call the resulting model a Hierarchical Perceiver (HiP). In sum, our contributions are: 1) scaling Perceiver-type models to raw high-resolution images and audio+video, 2) showing the feasibility of learning 1M+ positional embeddings from scratch using masked auto-encoding, 3) demonstrating competitive performance on raw data from ImageNet, AudioSet, PASCAL VOC, ModelNet40 and Kinetics datasets with the exact same, unchanged model and without specialized preprocessing or any tokenization.
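
As a rough illustration of the locality idea described in the abstract, the sketch below splits a flattened input into local blocks and cross-attends a small set of latents to each block independently, producing compressed per-block summaries that a later level could merge and process again. All names, shapes, and the single-head attention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_cross_attention(inputs, latents, w_q, w_k, w_v):
    """Cross-attend a small set of latents to each local block of inputs.

    inputs:  (num_blocks, block_size, d_in)    -- flattened signal split into groups
    latents: (num_blocks, num_latents, d_model) -- learned queries, one set per block
    """
    q = latents @ w_q                            # (B, L, d)
    k = inputs @ w_k                             # (B, S, d)
    v = inputs @ w_v                             # (B, S, d)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    attn = softmax(scores, axis=-1)              # (B, L, S): attention within each block only
    return attn @ v                              # (B, L, d): compressed block summaries

# Toy example (hypothetical sizes): 16 blocks of 64 inputs each, 8 latents per block.
rng = np.random.default_rng(0)
d_in, d = 32, 64
x = rng.normal(size=(16, 64, d_in))
z = rng.normal(size=(16, 8, d))
w_q, w_k, w_v = (rng.normal(size=s) * 0.02 for s in [(d, d), (d_in, d), (d_in, d)])
summaries = local_cross_attention(x, z, w_q, w_k, w_v)
print(summaries.shape)  # (16, 8, 64); deeper levels would merge blocks and repeat
```

Because attention is restricted to each block, cost grows with block size rather than with the full input length, which is what lets this kind of model reach raw high-resolution inputs.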

Details

Database: OAIster
Publication Type: Electronic Resource
Accession number: edsoai.on1333751997
Document Type: Electronic Resource