
Hydra Attention: Efficient Attention with Many Heads

Authors :
Bolya, Daniel
Fu, Cheng-Yang
Dai, Xiaoliang
Zhang, Peizhao
Hoffman, Judy
Publication Year :
2022

Abstract

While transformers have begun to dominate many tasks in vision, applying them to large images is still computationally difficult. A large reason for this is that self-attention scales quadratically with the number of tokens, which, in turn, scales quadratically with the image size. On larger images (e.g., 1080p), over 60% of the total computation in the network is spent solely on creating and applying attention matrices. We take a step toward solving this issue by introducing Hydra Attention, an extremely efficient attention operation for Vision Transformers (ViTs). Paradoxically, this efficiency comes from taking multi-head attention to its extreme: by using as many attention heads as there are features, Hydra Attention is computationally linear in both tokens and features with no hidden constants, making it significantly faster than standard self-attention in an off-the-shelf ViT-B/16 by a factor of the token count. Moreover, Hydra Attention retains high accuracy on ImageNet and, in some cases, actually improves it.

Comment: Accepted CADL 2022 (ECCV Workshop)
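The abstract's claim of linearity in both tokens and features follows from using one attention head per feature and a decomposable kernel in place of softmax, so the key-value aggregate over all tokens is computed once and reused by every query. Below is a minimal Python/PyTorch sketch of that idea; the cosine-similarity normalization, function name, and shapes are illustrative assumptions based on the abstract, not the authors' released code.

import torch
import torch.nn.functional as F

def hydra_attention(q, k, v):
    # q, k, v: (batch, tokens, features). With as many heads as features,
    # each head is a single channel; replacing softmax with a decomposable
    # (here, cosine-similarity) kernel lets the token sum be shared across queries.
    q = F.normalize(q, dim=-1)               # kernelized queries
    k = F.normalize(k, dim=-1)               # kernelized keys
    kv = (k * v).sum(dim=1, keepdim=True)    # aggregate over tokens: O(tokens * features)
    return q * kv                            # gate the aggregate per query: O(tokens * features)

# Example: ViT-B/16 on a 224x224 image yields 197 tokens of 768 features.
x = torch.randn(2, 197, 768)
out = hydra_attention(x, x, x)               # shape: (2, 197, 768)

Because no tokens-by-tokens attention matrix is ever formed, cost stays linear in the token count, which is where the speedup over standard self-attention at large image sizes comes from.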

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2209.07484
Document Type :
Working Paper