
Condensing Multilingual Knowledge with Lightweight Language-Specific Modules

Authors:
Xu, Haoran
Tan, Weiting
Li, Shuyue Stella
Chen, Yunmo
Van Durme, Benjamin
Koehn, Philipp
Murray, Kenton
Publication Year:
2023

Abstract

Incorporating language-specific (LS) modules is a proven way to boost performance in multilingual machine translation. Like Mixture-of-Experts (MoE), this approach adds capacity without inflating FLOPs. However, scaling it to hundreds of languages (experts) becomes unmanageable because of the prohibitive number of parameters introduced by the full-rank matrices in fully-connected layers. In this work, we introduce the Language-Specific Matrix Synthesis (LMS) method, which constructs LS modules by generating a low-rank matrix from two significantly smaller matrices as an approximation of the full-rank matrix. Furthermore, we condense multilingual knowledge from multiple LS modules into a single shared module with the Fuse Distillation (FD) technique, improving the efficiency of inference and model serialization. We show that our LMS method significantly outperforms previous LS methods and MoE methods given the same number of extra parameters, e.g., by 1.73 BLEU points over the Switch Transformer on many-to-many multilingual machine translation. Importantly, LMS achieves comparable translation performance with far fewer parameters.

Comment: Accepted at the main conference of EMNLP 2023
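The core idea the abstract describes, approximating each language's full-rank matrix as the product of two much smaller matrices, can be sketched in a few lines. The sketch below is only an illustration of that low-rank construction: the hidden size, rank, number of languages, and the random factors A and B are assumptions, not the paper's actual configuration.

```python
import numpy as np

d_model = 512    # hidden size of the feed-forward layer (illustrative)
rank = 16        # low rank r << d_model (illustrative)
num_langs = 100  # number of languages, each with its own LS module

# Full-rank LS modules: one d x d matrix per language.
full_rank_params = num_langs * d_model * d_model

# LMS-style low-rank synthesis (per the abstract): each language's
# matrix is generated from two significantly smaller matrices.
low_rank_params = num_langs * 2 * d_model * rank

rng = np.random.default_rng(0)
A = rng.standard_normal((d_model, rank))  # hypothetical per-language factor
B = rng.standard_normal((rank, d_model))  # hypothetical per-language factor
W_lang = A @ B                            # rank-r stand-in for a full d x d matrix

print(f"full-rank params: {full_rank_params:,}")  # 26,214,400
print(f"low-rank params:  {low_rank_params:,}")   # 1,638,400
```

With these illustrative sizes, the low-rank construction stores roughly 16x fewer parameters per language, which is what makes scaling to hundreds of languages tractable.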
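Fuse Distillation is described only at a high level in the abstract. One plausible reading is a standard teacher-student objective that pushes a single shared module to match the output distributions produced when routing through the per-language modules, so that only the shared module needs to be kept for inference and serialization. The sketch below assumes that reading; the logits, sizes, and loss form are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-9):
    # Mean KL divergence KL(p || q) over a batch of distributions.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean()

rng = np.random.default_rng(0)
# Hypothetical logits: one forward pass routed through a language-specific
# module (teacher) and one through the single shared module (student).
teacher_logits = rng.standard_normal((8, 32000))  # batch x vocab, illustrative
student_logits = rng.standard_normal((8, 32000))

# Distillation-style objective: pull the shared module's predictions toward
# the LS module's, condensing multilingual knowledge into one module.
fd_loss = kl(softmax(teacher_logits), softmax(student_logits))
print(f"distillation loss: {fd_loss:.4f}")
```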

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2305.13993
Document Type:
Working Paper