
Learning machine translation from in-domain and out-of-domain data

Authors:
Turchi, M.
Goutte, C.
Cristianini, N.
Source:
Scopus-Elsevier

Abstract

The performance of Phrase-Based Statistical Machine Translation (PBSMT) systems mostly depends on training data. Many papers have investigated how to create new resources in order to increase the size of the training corpus in an attempt to improve PBSMT performance. In this work, we analyse and characterize how the in-domain and out-of-domain performance of PBSMT is impacted as the amount of training data increases. Two different PBSMT systems (Moses and Portage), two of the largest parallel corpora (the Giga French-English and UN Chinese-English datasets), and several in- and out-of-domain test sets were used to build high-quality learning curves showing consistent logarithmic growth in performance. These results are stable across language pairs, PBSMT systems and domains. We also analyse the respective impact of additional training data for estimating the language and translation models. Our proposed model approximates the learning curves very well and indicates that the translation model contributes about 30% more to the performance gain than the language model.

16th Annual Conference of the European Association for Machine Translation (EAMT), 28-30 May 2012, Trento, Italy
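The logarithmic growth reported in the abstract lends itself to a simple curve fit. The Python sketch below fits the model BLEU(n) = a + b * ln(n) to (corpus size, BLEU) measurements by ordinary least squares; the data points are made up for illustration, and the plain log model is an assumption standing in for the paper's exact curve family.

    import numpy as np

    # Hypothetical learning-curve measurements: training-set sizes (in
    # sentence pairs) and the BLEU score obtained at each size. Real
    # values would come from training the PBSMT system on nested
    # subsets of the parallel corpus.
    sizes = np.array([10_000, 50_000, 100_000, 500_000, 1_000_000, 5_000_000])
    bleu = np.array([18.2, 22.5, 24.1, 27.8, 29.3, 32.6])

    # Fit BLEU(n) = a + b * ln(n) by ordinary least squares: the design
    # matrix has a constant column and a log-size column.
    X = np.column_stack([np.ones(len(sizes)), np.log(sizes)])
    (a, b), *_ = np.linalg.lstsq(X, bleu, rcond=None)
    print(f"Fitted curve: BLEU(n) = {a:.2f} + {b:.2f} * ln(n)")

    # Extrapolate to a corpus of 10M sentence pairs.
    n_new = 10_000_000
    print(f"Predicted BLEU at {n_new:,} pairs: {a + b * np.log(n_new):.2f}")

On such a fit, each doubling of the corpus adds roughly b * ln(2) BLEU points, which is one way to read the diminishing returns that a logarithmic learning curve implies.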

Details

Database:
OpenAIRE
Journal:
Scopus-Elsevier
Accession number:
edsair.dedup.wf.001..bf1db7221e822f2bc4e838fe8e38fee9