
Are Compressed Language Models Less Subgroup Robust?

Authors:
Gee, Leonidas
Zugarini, Andrea
Quadrianto, Novi
Source:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main Track
Publication Year:
2024

Abstract

To reduce the inference cost of large language models, model compression is increasingly used to create smaller scalable models. However, little is known about their robustness to minority subgroups defined by the labels and attributes of a dataset. In this paper, we investigate the effects of 18 different compression methods and settings on the subgroup robustness of BERT language models. We show that worst-group performance does not depend on model size alone, but also on the compression method used. Additionally, we find that model compression does not always worsen the performance on minority subgroups. Altogether, our analysis serves to further research into the subgroup robustness of model compression.

Comment: The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)
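
The worst-group performance referenced in the abstract can be illustrated with a minimal sketch, not taken from the paper: it assumes subgroups are defined by (label, attribute) pairs and that worst-group accuracy is the lowest per-subgroup accuracy. All function and variable names below are illustrative.

    # Minimal sketch (assumption, not the paper's code): worst-group accuracy
    # over subgroups defined by (label, attribute) pairs.
    from collections import defaultdict

    def worst_group_accuracy(predictions, labels, attributes):
        """Return the lowest accuracy over subgroups defined by (label, attribute)."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for pred, label, attr in zip(predictions, labels, attributes):
            group = (label, attr)
            total[group] += 1
            correct[group] += int(pred == label)
        return min(correct[g] / total[g] for g in total)

    # Example: the (label=1, attribute=1) subgroup is predicted poorly,
    # so it determines the worst-group score.
    preds  = [1, 1, 0, 0, 1, 0]
    labels = [1, 1, 0, 1, 1, 0]
    attrs  = [0, 0, 0, 1, 1, 1]
    print(worst_group_accuracy(preds, labels, attrs))  # 0.5
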

Details

Database:
arXiv
Journal:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main Track
Publication Type:
Report
Accession number:
edsarx.2403.17811
Document Type:
Working Paper
Full Text:
https://doi.org/10.18653/v1/2023.emnlp-main.983