
HyperEmbed: Tradeoffs Between Resources and Performance in NLP Tasks with Hyperdimensional Computing enabled Embedding of n-gram Statistics

Authors :
Alonso, Pedro
Shridhar, Kumar
Kleyko, Denis
Osipov, Evgeny
Liwicki, Marcus
Source :
2021 International Joint Conference on Neural Networks (IJCNN)
Publication Year :
2020

Abstract

Recent advances in Deep Learning have led to a significant performance increase on several NLP tasks; however, the models are becoming more and more computationally demanding. Therefore, this paper tackles the domain of computationally efficient algorithms for NLP tasks. In particular, it investigates distributed representations of n-gram statistics of texts. The representations are formed using hyperdimensional computing enabled embedding. These representations then serve as features, which are used as input to standard classifiers. We investigate the applicability of the embedding on one large and three small standard datasets for classification tasks using nine classifiers. The embedding achieved on-par F1 scores while decreasing the time and memory requirements several-fold compared to conventional n-gram statistics; e.g., for one of the classifiers on a small dataset, the memory reduction was 6.18 times, while train and test speed-ups were 4.62 and 3.84 times, respectively. For many classifiers on the large dataset, memory reduction was ca. 100 times and train and test speed-ups were over 100 times. Importantly, the usage of distributed representations formed via hyperdimensional computing allows dissecting the strict dependency between the dimensionality of the representation and the n-gram size, thus opening room for tradeoffs.

Comment: 9 pages, 1 figure, 6 tables
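
The abstract describes forming fixed-dimensional hypervectors from n-gram statistics and feeding them to standard classifiers. The sketch below is not taken from the paper; it illustrates one common way such an embedding can be built, under assumptions of random bipolar character hypervectors, position binding via cyclic shifts, and bundling by summation. Its main point is that the feature dimensionality d is a free parameter, independent of the n-gram size n.

    # Minimal sketch (not the authors' implementation) of a
    # hyperdimensional-computing embedding of n-gram statistics.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 1000          # hypervector dimensionality (free parameter, assumed here)
    n = 3             # n-gram size
    alphabet = "abcdefghijklmnopqrstuvwxyz -"
    # One random bipolar hypervector per symbol (item memory).
    item_memory = {ch: rng.choice([-1, 1], size=d) for ch in alphabet}

    def embed(text: str) -> np.ndarray:
        """Bundle all n-gram hypervectors of `text` into one d-dimensional vector."""
        v = np.zeros(d)
        for i in range(len(text) - n + 1):
            # Bind the n symbols of the n-gram: a cyclic shift encodes the
            # position, an elementwise product combines the shifted vectors.
            g = np.ones(d)
            for j, ch in enumerate(text[i:i + n]):
                g *= np.roll(item_memory[ch], j)
            v += g        # bundle (superpose) the n-gram into the text vector
        return v

    # The resulting vectors serve as fixed-size features for standard classifiers.
    features = np.stack([embed("hyperdimensional computing"),
                         embed("n-gram statistics")])
    print(features.shape)  # (2, 1000): size set by d, not by n or vocabulary size

Because the vector size depends only on d, the same classifier input size can be kept while varying n, which is the tradeoff the abstract refers to; the specific binding and bundling operations used in the paper may differ from this sketch.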

Details

Database :
arXiv
Journal :
2021 International Joint Conference on Neural Networks (IJCNN)
Publication Type :
Report
Accession number :
edsarx.2003.01821
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/IJCNN52387.2021.9534359