
Exploring Design Choices for Building Language-Specific LLMs

Authors :
Tejaswi, Atula
Gupta, Nilesh
Choi, Eunsol
Publication Year :
2024

Abstract

Despite rapid progress in large language models (LLMs), their performance on the vast majority of languages remains unsatisfactory. In this paper, we study building language-specific LLMs by adapting monolingual and multilingual LLMs. We conduct systematic experiments on how design choices (base model selection, vocabulary extension, and continued fine-tuning) impact the adapted LLM, both in terms of efficiency (how many tokens are needed to encode the same amount of information) and end-task performance. We find that (1) the initial performance before adaptation is not always indicative of the final performance, (2) efficiency can be easily improved with simple vocabulary extension and continued fine-tuning in most LLMs we study, and (3) the optimal adaptation method is highly language-dependent, and the simplest approach works well across various experimental settings. Adapting English-centric models can yield better results than adapting multilingual models despite their worse initial performance on low-resource languages. Together, our work lays the foundation for efficiently building language-specific LLMs by adapting existing LLMs.

Comment: 15 pages, 6 figures, 11 tables
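The efficiency notion above (tokens needed to encode the same information) is directly measurable. The sketch below is an illustration only, not the paper's code: it computes tokens per character with the Hugging Face transformers library and mimics a toy vocabulary extension. The gpt2 tokenizer, the Hindi sample sentence, and the added tokens are arbitrary stand-ins.

```python
# A minimal sketch (not from the paper) of measuring tokenization efficiency:
# the average number of tokens a tokenizer needs per character of text.
from transformers import AutoTokenizer

def tokens_per_char(tokenizer, texts):
    """Average tokens per character; lower means the vocabulary
    encodes the language more compactly."""
    n_tokens = sum(len(tokenizer.encode(t, add_special_tokens=False)) for t in texts)
    n_chars = sum(len(t) for t in texts)
    return n_tokens / n_chars

texts = ["नमस्ते, आप कैसे हैं?"]  # stand-in: replace with target-language sentences
tok = AutoTokenizer.from_pretrained("gpt2")  # an English-centric base tokenizer
print(f"tokens/char before extension: {tokens_per_char(tok, texts):.2f}")

# Toy vocabulary extension: add frequent target-language strings as tokens.
# In actual adaptation, the model's embeddings would also be resized to match
# (e.g., model.resize_token_embeddings(len(tok))) before continued fine-tuning.
tok.add_tokens(["नमस्ते", "कैसे"])  # illustrative new tokens
print(f"tokens/char after extension:  {tokens_per_char(tok, texts):.2f}")
```

Under this metric, a drop in tokens per character after extension corresponds to the efficiency gains the abstract attributes to simple vocabulary extension plus continued fine-tuning.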

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.14670
Document Type :
Working Paper