
Accelerating attention mechanism on FPGAs based on efficient reconfigurable systolic array

Authors :
Wenhua Ye
Xu Zhou
Joey TianYi Zhou
Cen Chen
Kenli Li
Source :
ACM Transactions on Embedded Computing Systems.
Publication Year :
2022
Publisher :
Association for Computing Machinery (ACM), 2022.

Abstract

Transformer model architectures have recently received great interest in natural language processing, machine translation, and computer vision, where attention mechanisms are their building blocks. However, the attention mechanism is expensive because of its intensive matrix computations and complicated data flow. Existing hardware architectures have disadvantages for the computing structure of attention, such as inflexibility and low efficiency. Most existing papers accelerate attention by reducing the amount of computation through various pruning algorithms, which affects the results to a certain extent depending on the sparsity. This paper proposes a hardware accelerator for multi-head attention (MHA) on field-programmable gate arrays (FPGAs) with a reconfigurable architecture, an efficient systolic array, and a hardware-friendly radix-2 softmax. We propose a novel method called the Four-input Processing Element (FPE) to double the computation rate of the data-aware systolic array (SA) and make it efficient and load-balanced. In particular, the computation framework is carefully designed to ensure efficient utilization of the SA. Our design is evaluated on a Xilinx Alveo U250 card, and the proposed architecture achieves 51.3× and 17.3× improvements in latency and 54.4× and 17.9× energy savings compared to CPU and GPU, respectively.
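For context, the sketch below shows the computation the abstract refers to: standard multi-head attention with a base-2 ("radix-2") softmax substituted for the usual e^x, since 2^x decomposes in hardware into an integer part (a shift) and a fractional part (a small lookup or piecewise approximation). This is a minimal NumPy illustration of the general technique, not the paper's implementation; the fractional-part handling, the scaling convention (note 2^x = e^(x ln 2), so a base-2 softmax equals softmax(x · ln 2)), and all function names here are assumptions.

```python
import numpy as np

def radix2_softmax(x, axis=-1):
    # Base-2 softmax sketch: replaces e^x with 2^x. In hardware, 2^x
    # splits into an integer exponent (a barrel shift) and a fractional
    # part in [0, 1) (typically a small LUT or piecewise-linear fit --
    # approximated here with the exact np.exp2, which is an assumption).
    x = x - x.max(axis=axis, keepdims=True)   # max-subtract for stability
    i = np.floor(x)                           # integer part -> shift
    f = x - i                                 # fractional part in [0, 1)
    p = np.exp2(f) * np.exp2(i)               # 2^f * 2^i = 2^x
    return p / p.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads):
    # X: (seq_len, d_model); Wq/Wk/Wv/Wo: (d_model, d_model) projections.
    seq_len, d_model = X.shape
    d_head = d_model // num_heads

    def split(M):  # (seq, d_model) -> (heads, seq, d_head)
        return M.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    Q, K, V = split(X @ Wq), split(X @ Wk), split(X @ Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    attn = radix2_softmax(scores, axis=-1)
    out = (attn @ V).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo
```

The large matrix products (X @ Wq, Q @ Kᵀ, attn @ V) are the parts that map onto the systolic array the paper accelerates; the softmax is the nonlinear step the radix-2 formulation makes hardware-friendly.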

Subjects

Hardware and Architecture
Software

Details

ISSN :
1558-3465 and 1539-9087
Database :
OpenAIRE
Journal :
ACM Transactions on Embedded Computing Systems
Accession number :
edsair.doi...........69e8d829b15591b6950b5acc366d034b
Full Text :
https://doi.org/10.1145/3549937