
Investigating the translation capabilities of Large Language Models trained on parallel data only

Authors:
Gilabert, Javier García
Escolano, Carlos
Savall, Aleix Sant
De Luca Fornaciari, Francesca
Mash, Audrey
Liao, Xixian
Melero, Maite
Publication Year:
2024

Abstract

In recent years, Large Language Models (LLMs) have demonstrated exceptional proficiency across a broad spectrum of Natural Language Processing (NLP) tasks, including Machine Translation. However, previous methods predominantly relied on iterative processes such as instruction fine-tuning or continual pre-training, leaving unexplored the challenges of training LLMs solely on parallel data. In this work, we introduce PLUME (Parallel Language Model), a collection of three 2B LLMs featuring varying vocabulary sizes (32k, 128k, and 256k) trained exclusively on Catalan-centric parallel examples. These models perform comparably to previous encoder-decoder architectures on 16 supervised translation directions and 56 zero-shot ones. Utilizing this set of models, we conduct a thorough investigation into the translation capabilities of LLMs, probing their performance, the impact of the different elements of the prompt, and their cross-lingual representation space.

Comment: We release our code at: https://github.com/projecte-aina/Plume
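
The abstract describes decoder-only models trained purely on parallel data, where the prompt carries the source sentence and language information and the model's continuation is read off as the translation. The sketch below illustrates how such a model might be queried with the Hugging Face Transformers library; the model identifier and the language-tag prompt format are placeholders chosen for illustration, not the authors' released interface (see the repository linked above for the actual checkpoints and prompt conventions).

    # Minimal sketch, assuming a released checkpoint on the Hugging Face Hub.
    # The model ID and the "<src> ... <tgt>" tag format below are hypothetical.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "projecte-aina/plume-32k"  # hypothetical identifier

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Prompt: source sentence framed by source/target language tags; the
    # generated continuation is taken as the translation.
    prompt = "<ca> El gat dorm al sofà. <en>"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64, num_beams=5)

    # Decode only the newly generated tokens (the translation).
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))

Beam search is used here only as a common default for translation-style decoding; the paper's own evaluation settings may differ.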

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.09140
Document Type:
Working Paper