
FAST: Factorizable Attention for Speeding up Transformers

Authors:
Gerami, Armin
Hoover, Monte
Dulepet, Pranav S.
Duraiswami, Ramani
Publication Year:
2024

Abstract

Motivated by the factorization inherent in the original fast multipole method and the improved fast Gauss transform, we introduce a factorable form of attention that operates efficiently in high dimensions. This approach reduces the computational and memory complexity of the attention mechanism in transformers from $O(N^2)$ to $O(N)$. In comparison to previous attempts, our work presents a linearly scaling attention mechanism that maintains the full representation of the attention matrix without resorting to sparsification and incorporates the all-to-all relationship between tokens. We explore the properties of our new attention metric and conduct tests in various standard settings. Results indicate that our attention mechanism performs robustly and holds significant promise for diverse applications where self-attention is used.
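
To illustrate the general principle the abstract describes, the sketch below contrasts standard $O(N^2)$ softmax attention with attention over a factorizable kernel, where the $N \times N$ attention matrix is never formed explicitly. This is a minimal, generic linear-attention example using an ELU+1 feature map as a stand-in; the feature map `phi` and the helper names are illustrative assumptions, not the fastmax kernel proposed in the FAST paper.

```python
import numpy as np

def quadratic_attention(Q, K, V):
    """Standard softmax attention: O(N^2) time and memory in sequence length N."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])               # (N, N) attention matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def factorized_attention(Q, K, V, phi=lambda x: np.where(x > 0, x + 1.0, np.exp(x))):
    """Linear-time attention for a kernel that factorizes as phi(q) . phi(k).

    `phi` here is the ELU+1 feature map from generic linear attention;
    it is a placeholder, not the factorization used in FAST.
    """
    Qf, Kf = phi(Q), phi(K)                               # (N, d) feature-mapped queries/keys
    KV = Kf.T @ V                                         # (d, d_v): key/value summary, computed once
    Z = Kf.sum(axis=0)                                    # (d,): running normalizer over keys
    return (Qf @ KV) / (Qf @ Z)[:, None]                  # O(N * d * d_v), no N x N matrix

# Toy usage: cost of the factorized version grows linearly in N.
rng = np.random.default_rng(0)
N, d = 512, 16
Q, K, V = rng.standard_normal((3, N, d))
out = factorized_attention(Q, K, V)
print(out.shape)  # (512, 16)
```

The key design point is that the kernel's factorization lets the key/value summary `KV` and the normalizer `Z` be accumulated in a single pass over the sequence, after which each query is processed independently in $O(d \, d_v)$ work.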

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2402.07901
Document Type:
Working Paper