
Pretrained Language Models for Document-Level Neural Machine Translation

Authors:
Li, Liangyou
Jiang, Xin
Liu, Qun
Publication Year:
2019

Abstract

Previous work on document-level NMT usually focuses on limited contexts because performance degrades on larger contexts. In this paper, we investigate using large contexts with three main contributions: (1) unlike previous work, which pretrained models on large-scale sentence-level parallel corpora, we use pretrained language models, specifically BERT, which are trained on monolingual documents; (2) we propose context manipulation methods to control the influence of large contexts, which lead to comparable results between systems using small and large contexts; (3) we introduce multi-task training for regularization to keep models from overfitting the training corpora, which further improves our systems together with a deeper encoder. Experiments are conducted on the widely used IWSLT data sets with three language pairs, i.e., Chinese-English, French-English, and Spanish-English. Results show that our systems are significantly better than three previously reported document-level systems.
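The abstract describes the approach only at a high level, so the following is a minimal PyTorch sketch of one way the two key ideas could be wired together: a gate that controls how much BERT-encoded document context flows into sentence-level encoder states, and a multi-task loss that adds an auxiliary term for regularization. All module names, dimensions, the gating formulation, and the loss weighting are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class GatedContextFusion(nn.Module):
    """Hypothetical gate controlling the influence of document context
    (e.g., BERT states over surrounding sentences) on sentence-level
    encoder states. Not the paper's exact architecture."""

    def __init__(self, d_model: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, sent_states: torch.Tensor, ctx_states: torch.Tensor) -> torch.Tensor:
        # Attend from sentence tokens to the document-context representations.
        ctx_summary, _ = self.attn(sent_states, ctx_states, ctx_states)
        # A sigmoid gate decides, per position, how much context to mix in.
        g = torch.sigmoid(self.gate(torch.cat([sent_states, ctx_summary], dim=-1)))
        return sent_states + g * ctx_summary


if __name__ == "__main__":
    d_model = 768  # matches BERT-base hidden size (assumption)
    fusion = GatedContextFusion(d_model)

    sent = torch.randn(2, 20, d_model)   # sentence-level encoder states
    ctx = torch.randn(2, 200, d_model)   # pretrained-LM states over a large context
    fused = fusion(sent, ctx)

    # Multi-task training for regularization: translation loss plus an
    # auxiliary loss, weighted by a hyperparameter alpha (placeholders).
    translation_loss = fused.pow(2).mean()   # stand-in for NMT cross-entropy
    auxiliary_loss = sent.pow(2).mean()      # stand-in for the auxiliary task
    alpha = 0.5
    total_loss = translation_loss + alpha * auxiliary_loss
    print(fused.shape, total_loss.item())
```

In this sketch the gate is what "controls the influence of large contexts": when the gate saturates near zero the model falls back to sentence-level behavior, which is one plausible reading of how small- and large-context systems could reach comparable results.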

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1911.03110
Document Type:
Working Paper