Abstract
While most machine translation evaluation techniques (BLEU, NIST, TER, METEOR) assess translation quality against a set of reference translations, we propose evaluating the literality of a set of (human- or machine-generated) translations to infer their potential quality. We provide evidence suggesting that more literal translations are produced more easily, by both humans and machines, and are also less error-prone. Literal translations may not be appropriate, or even possible, for all languages, text types, and translation purposes. However, in this paper we show that an assessment of the literality of translations (1) allows us to evaluate human and machine translations in a similar fashion and (2) may be instrumental in predicting machine translation quality scores.
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Workshop on Automatic and Manual Metrics for Operational Translation Evaluation. MTE 2014 |
| Editors | Keith J. Miller, Lucia Specia, Kim Harris, Stacey Bailey |
| Place of publication | Paris |
| Publisher | European Language Resources Association |
| Publication date | 2014 |
| Pages | 45-50 |
| Publication status | Published - 2014 |
| Event | The Workshop on Automatic and Manual Metrics for Operational Translation Evaluation. MTE 2014, Harpa Conference Centre, Reykjavik, Iceland. Duration: 26 May 2014 → 26 May 2014. http://mte2014.github.io/ |
Workshop
| Workshop | The Workshop on Automatic and Manual Metrics for Operational Translation Evaluation. MTE 2014 |
| --- | --- |
| Location | Harpa Conference Centre |
| Country/Territory | Iceland |
| City | Reykjavik |
| Period | 26/05/2014 → 26/05/2014 |
| Other | Held in connection with LREC 2014: The 9th edition of the Language Resources and Evaluation Conference |
| Internet address | http://mte2014.github.io/ |