Automatic Evaluation of Machine Translation: Correlating Post-editing Effort and Translation Edit Rate (TER) Scores

Mercedes Garcia Martinez, Arlene Koglin, Bartolomé Mesa-Lao, Michael Carl

    Research output: Chapter in Book/Report/Conference proceeding › Conference abstract in proceedings › Research › peer-review

    Abstract

The availability of systems capable of producing fairly accurate translations has increased the popularity of machine translation (MT). The translation industry is steadily incorporating MT into its workflows, engaging human translators to post-edit the raw MT output so that it complies with a set of quality criteria in as few edits as possible. The quality of MT systems is generally measured by automatic metrics that produce scores intended to correlate with human evaluation.

In this study, we investigate correlations between one such metric, Translation Edit Rate (TER), and actual post-editing effort as reflected in post-editing process data collected under experimental conditions. Using the CasMaCat workbench as a post-editing tool, keystroke and eye-tracking data were collected from five professional translators under two different conditions: i) traditional post-editing and ii) interactive post-editing. In the second condition, as the user types, the MT system suggests alternative target translations which the post-editor can interactively accept or overwrite, whereas in the first condition no aids are provided to the user while editing the raw MT output. Each of the five participants was asked to post-edit 12 different texts using the interactivity provided by the system and 12 additional texts without interactivity (i.e. traditional post-editing) over a period of 6 weeks.

Process research in post-editing is often grounded in three different but related categories of post-editing effort, namely i) temporal (time), ii) cognitive (mental processes) and iii) technical (keyboard activity). For the purposes of this research, TER scores were correlated with two different indicators of post-editing effort as computed in the CRITT Translation Process Research Database (TPR-DB)*.
On the one hand, post-editing temporal effort was measured using FDur values (duration of segment production time, excluding keystroke pauses ≥ 200 seconds) and KDur values (duration of coherent keyboard activity, excluding keystroke pauses ≥ 5 seconds). On the other hand, post-editing technical effort was measured using Mdel values (number of manually generated deletions) and Mins values (number of manually generated insertions).

Results show that TER scores have a positive correlation with actual post-editing effort, as reflected both in manual insertions and deletions (Mins/Mdel) and in the time needed to perform the task (KDur/FDur).
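The kind of correlation reported above can be sketched in a few lines of Python. The sketch below is illustrative only, not the authors' implementation: it computes a simplified, shift-free TER (word-level edit distance divided by reference length; full TER additionally counts block shifts) and a Pearson correlation between TER scores and effort values such as Mins+Mdel counts. All function names are hypothetical.

```python
def word_edit_distance(hyp, ref):
    """Word-level Levenshtein distance (insertions, deletions, substitutions).

    Simplification: full TER also allows block shifts at edit cost 1;
    those are omitted here, so this is only an approximation of TER.
    """
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i                       # delete all remaining hyp words
    for j in range(len(r) + 1):
        d[0][j] = j                       # insert all remaining ref words
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(h)][len(r)]

def ter(hyp, ref):
    """TER = number of edits / number of reference words (shift-free)."""
    return word_edit_distance(hyp, ref) / max(len(ref.split()), 1)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. segment-level TER scores vs. Mins+Mdel (or KDur/FDur) values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

For example, `ter("the dog sat", "the cat sat")` yields 1/3 (one substitution over three reference words), and `pearson` applied to per-segment TER scores and per-segment keystroke counts would return a value near +1 under the positive correlation the abstract reports.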
    Original language: English
    Title of host publication: Books of Abstracts of the 5th IATIS Conference: Innovation Paths in Translation and Intercultural Studies
    Editors: Fábio Alves, Adriana Silvina Pagano, Arthur de Melo Sá, Kícila Ferreguetti
    Number of pages: 1
    Place of publication: Belo Horizonte
    Publisher: International Association for Translation and Intercultural Studies (IATIS)
    Publication date: 2015
    Pages: 150
    Publication status: Published - 2015
    Event: IATIS 5th International Conference: Innovation Paths in Translation and Intercultural Studies - Belo Horizonte, Brazil
    Duration: 7 Jul 2015 – 10 Jul 2015
    Conference number: 5
    http://www.iatis.org/index.php/iatis-belo-horizonte-conference/itemlist/category/195-main-programme

    Conference

    Conference: IATIS 5th International Conference
    Number: 5
    Country: Brazil
    City: Belo Horizonte
    Period: 07/07/2015 – 10/07/2015
    Internet address: http://www.iatis.org/index.php/iatis-belo-horizonte-conference/itemlist/category/195-main-programme
