
Enhancing Translation Accuracy of Large Language Models through Continual Pre-Training on Parallel Data

Authors:
Kondo, Minato
Utsuro, Takehito
Nagata, Masaaki
Publication Year:
2024

Abstract

In this paper, we propose a two-phase training approach in which pre-trained large language models are continually pre-trained on parallel data and then supervised fine-tuned with a small amount of high-quality parallel data. To investigate the effectiveness of this approach, we conducted continual pre-training with a 3.8B-parameter model and parallel data across eight different formats. We evaluated these methods on thirteen test sets for Japanese-to-English and English-to-Japanese translation. The results demonstrate that when utilizing parallel data in continual pre-training, it is essential to alternate between source and target sentences. Additionally, translation accuracy improves only for translation directions where the order of source and target sentences aligns between the continual pre-training data and inference. We also demonstrate that the LLM-based translation model is more robust in translating spoken language and achieves higher accuracy with less training data than supervised encoder-decoder models. Finally, we show that the highest accuracy is achieved when the continual pre-training data consist of interleaved source and target sentences and when tags are added to the source sentences.

Comment: IWSLT2024, 18 pages
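The abstract does not describe the data format in detail, but as a rough illustration, the sketch below shows one way parallel sentence pairs could be interleaved with a tag on the source sentence before continual pre-training. The tag tokens `<src>`/`<tgt>`, the helper name `build_interleaved_examples`, and the line-level layout are assumptions for illustration, not the authors' actual format.

```python
# A minimal sketch (not the paper's released code) of interleaving parallel
# data for continual pre-training: each training example alternates the
# source sentence and its target translation, with a tag marking the source.
from typing import Iterable, List, Tuple


def build_interleaved_examples(
    pairs: Iterable[Tuple[str, str]],
    source_tag: str = "<src>",  # hypothetical tag for the source sentence
    target_tag: str = "<tgt>",  # hypothetical tag for the target sentence
) -> List[str]:
    """Turn (source, target) sentence pairs into interleaved training text."""
    examples = []
    for src, tgt in pairs:
        # Source first, then target, so the order seen in pre-training
        # matches the intended translation direction at inference time.
        examples.append(f"{source_tag} {src}\n{target_tag} {tgt}")
    return examples


if __name__ == "__main__":
    ja_en_pairs = [
        ("これはテストです。", "This is a test."),
        ("今日は良い天気です。", "The weather is nice today."),
    ]
    for example in build_interleaved_examples(ja_en_pairs):
        print(example, end="\n\n")
```

Under this assumption, a Japanese-to-English model would be continually pre-trained on examples where the Japanese sentence precedes its English translation, consistent with the abstract's finding that accuracy improves only when the source/target order in the pre-training data matches the inference direction.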

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.03145
Document Type:
Working Paper