
What should be encoded by position embedding for neural network language models?

Authors :
Yu, Shuiyuan
Zhang, Zihao
Liu, Haitao
Source :
Natural Language Engineering; Mar 2024, Vol. 30, Issue 2, p294-318, 25p
Publication Year :
2024

Abstract

Word order is one of the most important grammatical devices and a basis for language understanding. However, the Transformer, one of the most popular NLP architectures, does not explicitly encode word order. A common solution is to incorporate position information through position encoding/embedding (PE). Although a variety of methods for incorporating position information have been proposed, the NLP community still lacks detailed statistical research on position information in real-life language. To better understand how position information influences the correlations between words, we investigated the factors that affect the frequency of words and word sequences in large corpora. Our results show that absolute position, relative position, being at either end of a sentence, and sentence length all significantly affect the frequency of words and word sequences. In addition, we observed that the frequency distribution of word sequences over relative position carries valuable grammatical information. Our study suggests that accurately capturing word–word correlations requires more than attending to absolute and relative position alone: Transformers should have access to more types of position-related information, which may require improvements to the current architecture. [ABSTRACT FROM AUTHOR]
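For context, the absolute position encoding the abstract refers to is most commonly the sinusoidal scheme of the original Transformer (Vaswani et al., 2017). The sketch below is a minimal NumPy illustration of that standard scheme, not of the statistical analysis performed in this paper; the array names, dimensions, and the stand-in token embeddings are illustrative assumptions only.

```python
import numpy as np

def sinusoidal_position_encoding(max_len: int, d_model: int) -> np.ndarray:
    """Return a (max_len, d_model) matrix of sinusoidal position encodings.

    PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    """
    positions = np.arange(max_len)[:, None]        # (max_len, 1) absolute positions
    dims = np.arange(0, d_model, 2)[None, :]       # (1, d_model/2) even dimensions
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                   # sine on even dimensions
    pe[:, 1::2] = np.cos(angles)                   # cosine on odd dimensions
    return pe

# Hypothetical usage: add absolute position information to token embeddings
# for a 10-token sentence with model width 16 (both values are placeholders).
pe = sinusoidal_position_encoding(max_len=10, d_model=16)
token_embeddings = np.random.randn(10, 16)         # stand-in embeddings
inputs_with_position = token_embeddings + pe
print(inputs_with_position.shape)                  # (10, 16)
```

Because this scheme encodes only absolute position, the paper's argument is that it cannot by itself capture the other position-related factors the authors identify, such as sentence-boundary effects and sentence length.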

Details

Language :
English
ISSN :
1351-3249
Volume :
30
Issue :
2
Database :
Complementary Index
Journal :
Natural Language Engineering
Publication Type :
Academic Journal
Accession number :
176361462
Full Text :
https://doi.org/10.1017/S1351324923000128