
Empirical Evaluation of Pre-trained Transformers for Human-Level NLP: The Role of Sample Size and Dimensionality

Authors:
Ganesan, Adithya V
Matero, Matthew
Ravula, Aravind Reddy
Vu, Huy
Schwartz, H. Andrew
Publication Year:
2021

Abstract

In human-level NLP tasks, such as predicting mental health, personality, or demographics, the number of observations is often smaller than the standard 768+ hidden-state sizes of each layer within modern transformer-based language models, limiting the ability to effectively leverage transformers. Here, we provide a systematic study of the role of dimension reduction methods (principal components analysis, factorization techniques, and multi-layer auto-encoders), as well as of embedding dimensionality and sample size, on predictive performance. We first find that fine-tuning large models with a limited amount of data poses a significant difficulty, which can be overcome with a pre-trained dimension reduction regime. RoBERTa consistently achieves top performance in human-level tasks, with PCA outperforming other reduction methods, particularly for users who write longer texts. Finally, we observe that a majority of the tasks achieve results comparable to the best performance with just $\frac{1}{12}$ of the embedding dimensions.

Comment: 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)
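
To illustrate the reduction regime the abstract describes, here is a minimal sketch (not the authors' released code): it mean-pools roberta-base hidden states into per-user embeddings, fits PCA on a separate background sample, and projects a small labeled task sample down to 1/12 of the 768 dimensions before fitting a linear predictor. The model name, pooling choice, placeholder texts, and ridge estimator are all illustrative assumptions.

```python
# Sketch: PCA-reduced transformer embeddings for a small-sample human-level task.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()

def embed(texts):
    """Mean-pool the last hidden layer into one 768-d vector per text."""
    vecs = []
    with torch.no_grad():
        for text in texts:
            inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
            hidden = model(**inputs).last_hidden_state          # (1, seq_len, 768)
            vecs.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.vstack(vecs)

# Background documents (no task labels needed) used only to learn the reduction.
background_texts = [f"placeholder background document {i}" for i in range(100)]
X_background = embed(background_texts)                           # (100, 768)

pca = PCA(n_components=768 // 12)                                # 64 dims = 1/12 of 768
pca.fit(X_background)

# Small labeled task sample, e.g. a per-user trait or demographic score.
task_texts = ["long day at work but it was rewarding",
              "feeling anxious about the exam tomorrow"]
y = np.array([31.0, 22.0])

X_task = pca.transform(embed(task_texts))                        # (n_users, 64)
predictor = Ridge().fit(X_task, y)                               # small linear model on reduced features
```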

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2105.03484
Document Type:
Working Paper
Full Text:
https://doi.org/10.18653/v1/2021.naacl-main.357