Towards document-level human MT evaluation: on the issues of annotator agreement, effort and misevaluation

Authors :
Castilho, Sheila
Source :
Castilho, Sheila ORCID: 0000-0002-8416-6555 (2021) Towards document-level human MT evaluation: on the issues of annotator agreement, effort and misevaluation. In: 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021), 19-23 April 2021, Online.
Publication Year :
2021
Publisher :
Association for Computational Linguistics (ACL), 2021.

Abstract

Document-level human evaluation of machine translation (MT) has been gaining interest in the community. However, little is known about the issues that arise when document-level methodologies are used to assess MT quality. In this article, we compare inter-annotator agreement (IAA) scores and the effort required to assess quality under different document-level methodologies, and we examine the issue of misevaluation when sentences are evaluated out of context.

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.od.......119..b01c7ea10b664dcabf5b292f9b8940d6