
Refiner: Refining Self-attention for Vision Transformers

Authors :
Zhou, Daquan
Shi, Yujun
Kang, Bingyi
Yu, Weihao
Jiang, Zihang
Li, Yuan
Jin, Xiaojie
Hou, Qibin
Feng, Jiashi
Publication Year :
2021

Abstract

Vision Transformers (ViTs) have shown competitive accuracy in image classification tasks compared with CNNs. Yet, they generally require much more data for model pre-training. Most recent works are thus dedicated to designing more complex architectures or training methods to address the data-efficiency issue of ViTs. However, few of them explore improving the self-attention mechanism, a key factor distinguishing ViTs from CNNs. Different from existing works, we introduce a conceptually simple scheme, called refiner, to directly refine the self-attention maps of ViTs. Specifically, refiner explores attention expansion that projects the multi-head attention maps to a higher-dimensional space to promote their diversity. Further, refiner applies convolutions to augment local patterns of the attention maps, which we show is equivalent to a distributed local attention: features are aggregated locally with learnable kernels and then globally aggregated with self-attention. Extensive experiments demonstrate that refiner works surprisingly well. Significantly, it enables ViTs to achieve 86% top-1 classification accuracy on ImageNet with only 81M parameters.
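The abstract describes two operations on the multi-head attention maps: an expansion that projects the head dimension to a higher-dimensional space, and a convolution that augments local patterns before the maps are applied to the values. The sketch below illustrates one plausible way to realize this in PyTorch; it is not the authors' released implementation, and the module name, expansion ratio, kernel size, and the reduction back to the original head count are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RefinedAttention(nn.Module):
    """Illustrative sketch of refiner-style self-attention (not the official code)."""

    def __init__(self, dim, num_heads=8, expansion_ratio=3, kernel_size=3):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        expanded = num_heads * expansion_ratio  # hypothetical expansion ratio
        # Attention expansion: 1x1 conv projects the head dimension to a higher one.
        self.expand = nn.Conv2d(num_heads, expanded, kernel_size=1)
        # Local refinement: depth-wise conv over each (N x N) attention map.
        self.refine = nn.Conv2d(expanded, expanded, kernel_size,
                                padding=kernel_size // 2, groups=expanded)
        # Reduce back to the original number of heads (an assumption of this sketch).
        self.reduce = nn.Conv2d(expanded, num_heads, kernel_size=1)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, H, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # attention maps: (B, H, N, N)
        attn = attn.softmax(dim=-1)
        # Refine the attention maps: expand heads, convolve locally, reduce back.
        attn = self.reduce(self.refine(self.expand(attn)))
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Usage example (assumed shapes): 197 tokens of dimension 384, batch of 2.
x = torch.randn(2, 197, 384)
print(RefinedAttention(dim=384, num_heads=8)(x).shape)  # torch.Size([2, 197, 384])
```

Treating the heads as channels of a (B, H, N, N) tensor is what makes the "distributed local attention" view concrete here: the depth-wise convolution aggregates attention weights over a local neighborhood of token pairs before the globally aggregated weighted sum over the values.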

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....9b538f4f49daf57f7c34ca41e91cb71c