
Too Brittle To Touch: Comparing the Stability of Quantization and Distillation Towards Developing Lightweight Low-Resource MT Models

Authors :
Diddee, Harshita
Dandapat, Sandipan
Choudhury, Monojit
Ganu, Tanuja
Bali, Kalika
Publication Year :
2022

Abstract

Leveraging shared learning through Massively Multilingual Models, state-of-the-art machine translation models are often able to adapt to the paucity of data for low-resource languages. However, this performance comes at the cost of significantly bloated models which are not practically deployable. Knowledge Distillation is one popular technique for developing competitive, lightweight models. In this work, we first evaluate its use to compress MT models, focusing on languages with extremely limited training data. Through our analysis across 8 languages, we find that the variance in the performance of the distilled models, owing to their dependence on priors such as the amount of synthetic data used for distillation, the student architecture, the training hyperparameters, and the confidence of the teacher models, makes distillation a brittle compression mechanism. To mitigate this, we explore the use of post-training quantization for the compression of these models. Here, we find that while distillation provides gains across some low-resource languages, quantization provides more consistent performance trends for the entire range of languages, especially the lowest-resource languages in our target set.

Comment: 16 Pages, 7 Figures, Accepted to WMT 2022 (Research Track)
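As a point of reference for the quantization approach discussed in the abstract, the sketch below shows post-training dynamic quantization of a seq2seq MT model in PyTorch. It is a minimal illustration only: the checkpoint name (`Helsinki-NLP/opus-mt-en-hi`) is a stand-in, and the paper's actual models, languages, and quantization setup may differ.

```python
# Minimal sketch: post-training dynamic quantization of an MT model.
# Assumptions: a Hugging Face seq2seq checkpoint as a stand-in for the
# paper's multilingual teacher; not the authors' exact pipeline.
import os
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-hi"  # placeholder translation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()

# Quantize the Linear layers' weights to int8; activations are quantized
# dynamically at inference time. No retraining or synthetic data is needed,
# which is why post-training quantization sidesteps the priors (distillation
# data volume, student architecture, hyperparameters) that the paper finds
# make distillation brittle.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m: torch.nn.Module) -> float:
    """Serialize the state dict to disk to estimate model size in MB."""
    torch.save(m.state_dict(), "tmp.pt")
    mb = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return mb

print(f"fp32: {size_mb(model):.1f} MB, int8: {size_mb(quantized_model):.1f} MB")

# Quick sanity check: translate a sentence with the compressed model (CPU).
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
with torch.no_grad():
    out = quantized_model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because only the serialized weights change, this kind of compression can be applied uniformly across all languages served by a single multilingual model, which is consistent with the abstract's observation that quantization yields more consistent trends than per-language distillation.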

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1381577919
Document Type :
Electronic Resource