
Dynamic Sparse Attention for Scalable Transformer Acceleration.

Authors :
Liu, Liu
Qu, Zheng
Chen, Zhaodong
Tu, Fengbin
Ding, Yufei
Xie, Yuan
Source :
IEEE Transactions on Computers; Dec 2022, Vol. 71, Issue 12, p. 3165-3178, 14 p.
Publication Year :
2022

Abstract

Transformers are the mainstream of NLP applications and are becoming increasingly popular in other domains such as Computer Vision. Despite the improvements in model quality, the enormous computation cost makes Transformers difficult to deploy, especially when the sequence length is large in emerging applications. The attention mechanism, the essential component of the Transformer, is the execution bottleneck due to its quadratic complexity in sequence length. Prior art explores sparse attention patterns to support long-sequence modeling, but those works rely on static or fixed patterns. We demonstrate that the sparse patterns are dynamic, depending on the input sequence. We therefore propose Dynamic Sparse Attention (DSA), which can efficiently exploit dynamic sparse patterns in attention. Compared with other methods, our approach achieves better trade-offs between accuracy and model complexity. Moving forward, we identify challenges and provide solutions to implement DSA on existing hardware (GPUs) and on specialized hardware in order to achieve practical speedups and efficiency improvements for Transformer execution. [ABSTRACT FROM AUTHOR]
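To make the idea of input-dependent sparsity concrete, below is a minimal NumPy sketch of dynamic sparse attention. It is an illustration of the general concept only, not the paper's DSA algorithm: the function name, the per-query top-k heuristic, and all parameters are assumptions for the example, and the sketch still computes the full quadratic score matrix, whereas the point of DSA is to predict the sparse pattern cheaply so that most of that computation can be skipped.

# Minimal NumPy sketch of input-dependent (dynamic) sparse attention.
# Illustrative only: a real implementation would estimate the mask at low
# cost instead of computing the dense score matrix first.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_sparse_attention(Q, K, V, k=8):
    """For each query, keep only its k highest-scoring keys (a pattern that
    changes with the input) and attend over that sparse set."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n_q, n_k) full scores
    # Dynamic mask: kept positions differ for every input sequence,
    # unlike a fixed sliding-window or block pattern.
    kth = np.sort(scores, axis=-1)[:, -k][:, None]
    mask = scores >= kth
    scores = np.where(mask, scores, -np.inf)      # drop low-scoring keys
    return softmax(scores, axis=-1) @ V

# Usage example (shapes are arbitrary for the demo).
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((16, 64)) for _ in range(3))
out = dynamic_sparse_attention(Q, K, V, k=8)
print(out.shape)  # (16, 64)

Because the kept key set is recomputed per query and per input, the pattern cannot be fixed at compile time; this is what distinguishes the dynamic approach from static sparse-attention schemes and what motivates the paper's hardware and GPU implementation challenges.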

Details

Language :
English
ISSN :
0018-9340
Volume :
71
Issue :
12
Database :
Complementary Index
Journal :
IEEE Transactions on Computers
Publication Type :
Academic Journal
Accession number :
160620899
Full Text :
https://doi.org/10.1109/TC.2022.3208206