A Length-Extrapolatable Transformer

Authors:
Sun, Yutao
Dong, Li
Patra, Barun
Ma, Shuming
Huang, Shaohan
Benhaim, Alon
Chaudhary, Vishrav
Song, Xia
Wei, Furu
Publication Year:
2022

Abstract

Position modeling plays a critical role in Transformers. In this paper, we focus on length extrapolation, i.e., training on short texts while evaluating longer sequences. We define attention resolution as an indicator of extrapolation, and we propose two designs to improve this metric in Transformers. Specifically, we introduce a relative position embedding that explicitly maximizes attention resolution. Moreover, we use blockwise causal attention during inference for better resolution. We evaluate different Transformer variants on language modeling. Experimental results show that our model achieves strong performance in both interpolation and extrapolation settings. The code will be available at https://aka.ms/LeX-Transformer.

Comment: 9 pages
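For intuition, the two designs named in the abstract can be sketched in a few lines of NumPy: a rotary-style relative position embedding with a per-dimension exponential decay on the attention scale, and a blockwise causal attention mask for inference. This is a minimal illustrative sketch; the function names, the gamma constant, the exact scale formula, and the block size are assumptions, not the released implementation (see https://aka.ms/LeX-Transformer for the official code).

    import numpy as np

    def xpos_like(x, pos, gamma=0.4, sign=+1):
        """Rotary position embedding with an exponential decay factor
        (xPos-style sketch). gamma and the scale formula are illustrative
        assumptions, not the paper's exact constants. Use sign=+1 for
        queries and sign=-1 for keys so that the q/k dot product depends
        only on the relative distance n - m."""
        d = x.shape[-1]
        half = d // 2
        # Standard rotary frequencies.
        theta = 1.0 / (10000.0 ** (np.arange(half) / half))
        angles = pos[:, None] * theta[None, :]              # (seq, half)
        cos, sin = np.cos(angles), np.sin(angles)
        x1, x2 = x[..., :half], x[..., half:]
        rotated = np.concatenate([x1 * cos - x2 * sin,
                                  x1 * sin + x2 * cos], axis=-1)
        # Per-dimension decay base in (0, 1]; lower dimensions decay faster.
        scale = (np.arange(half) / half + gamma) / (1.0 + gamma)
        scale = np.concatenate([scale, scale])
        return rotated * scale[None, :] ** (sign * pos[:, None])

    def blockwise_causal_mask(seq_len, block=512):
        """Inference-time mask: each position attends causally within its
        own block and to the whole previous block. The block size would
        normally match the training length."""
        mask = np.zeros((seq_len, seq_len), dtype=bool)
        for n in range(seq_len):
            start = max(0, (n // block - 1) * block)
            mask[n, start:n + 1] = True
        return mask

    # Toy usage: queries/keys for an 8-token sequence with 16-dim heads.
    pos = np.arange(8)
    q = xpos_like(np.random.randn(8, 16), pos, sign=+1)
    k = xpos_like(np.random.randn(8, 16), pos, sign=-1)
    mask = blockwise_causal_mask(8, block=4)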

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2212.10554
Document Type:
Working Paper