Machine Translation Errors and the Translation Process: A Study across Different Languages

Michael Carl, María Cristina Toledo Báez

Research output: Contribution to journal › Journal article › Research › peer-review


The paper describes an experiment in which two groups of translators annotate Spanish and simplified Chinese MT output of the same English source texts (ST) using an MQM-derived annotation schema. Annotators first fragmented the ST and MT output (i.e. the target text, TT) into alignment groups (AGs) and then labelled the AGs with an error code. We investigate the inter-annotator agreement of the AGs and their error annotations. We then correlate the average error agreement (i.e. the MT error evidence) with translation process data that we collected during the translation production of the same English texts in previous studies. We find that MT accuracy errors with higher error-evidence scores have an effect on production and reading durations during post-editing. We also find that from-scratch translation is more difficult for ST words that have more evident MT accuracy errors. Surprisingly, Spanish MT accuracy errors also correlate with total ST reading time for translations (post-editing and from-scratch translation) into very different languages. We conclude that expressions with MT accuracy issues in one language pair (English-to-Spanish) are likely to be difficult to translate into other languages as well, for both humans and computers – while this does not hold for MT fluency errors.
Original language: English
Journal: Journal of Specialised Translation
Issue number: 31
Pages (from-to): 107-132
Number of pages: 26
Publication status: Published - Jan 2019
Externally published: Yes


  • Translation quality assessment
  • MT error annotation
  • Inter-rater agreement
  • Post-editing
  • From-scratch translation
  • Translation accuracy
  • Translation effort
  • Translation modes