Automatic Evaluation of Machine Translation: Correlating Post-editing Effort and Translation Edit Rate (TER) Scores

Mercedes Garcia Martinez, Arlene Koglin, Bartolomé Mesa-Lao, Michael Carl

    Publication: Contribution to book/anthology/report › Conference abstract in proceedings › Research › peer-reviewed


    The availability of systems capable of producing fairly accurate translations has increased the popularity of machine translation (MT). The translation industry is steadily incorporating MT into its workflows, engaging human translators to post-edit the raw MT output so that it complies with a set of quality criteria in as few edits as possible. The quality of MT systems is generally measured by automatic metrics, producing scores that should correlate with human evaluation.

    In this study, we investigate correlations between one such metric, Translation Edit Rate (TER), and actual post-editing effort as reflected in post-editing process data collected under experimental conditions. Using the CasMaCat workbench as a post-editing tool, keystroke and eye-tracking data were collected from five professional translators under two different conditions: i) traditional post-editing and ii) interactive post-editing. In the second condition, as the user types, the MT system suggests alternative target translations which the post-editor can interactively accept or overwrite, whereas in the first condition no aids are provided to the user while editing the raw MT output. Each of the five participants was asked to post-edit 12 different texts using the interactivity provided by the system and 12 additional texts without interactivity (i.e. traditional post-editing) over a period of 6 weeks.

    Process research in post-editing is often grounded on three different but related categories of post-editing effort, namely i) temporal (time), ii) cognitive (mental processes) and iii) technical (keyboard activity). For the purposes of this research, TER scores were correlated with two different indicators of post-editing effort as computed in the CRITT Translation Process Research Database (TPR-DB).
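As background, TER counts the minimum number of edits (insertions, deletions, substitutions and block shifts) needed to turn the MT output into a reference translation, normalised by reference length. The following is a minimal sketch of a simplified variant that ignores block shifts, so it reduces to word-level Levenshtein distance over the reference length; it is illustrative only and not the implementation used in the study:

```python
def word_edit_distance(hyp, ref):
    """Word-level Levenshtein distance (insertions, deletions, substitutions)."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

def ter(hypothesis, reference):
    """Simplified TER: word-level edits divided by reference length.

    Full TER additionally allows block shifts at unit cost; this sketch omits them.
    """
    hyp, ref = hypothesis.split(), reference.split()
    return word_edit_distance(hyp, ref) / len(ref)
```

A perfect MT output thus scores 0, and higher scores indicate that more editing is required to reach the reference.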
    On the one hand, post-editing temporal effort was measured using FDur values (duration of segment production time excluding keystroke pauses ≥ 200 seconds) and KDur values (duration of coherent keyboard activity excluding keystroke pauses ≥ 5 seconds). On the other hand, post-editing technical effort was measured using Mdel values (number of manually generated deletions) and Mins values (number of manually generated insertions).

    Results show that TER scores have a positive correlation with actual post-editing effort, as reflected both in manual insertions and deletions (Mins/Mdel) and in the time taken to perform the task (KDur/FDur).
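Correlations of this kind can be computed, for example, as Pearson coefficients between per-segment TER scores and the TPR-DB effort indicators. A sketch with hypothetical per-segment values follows; the numbers are illustrative placeholders, not data from the study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-segment values (illustrative only, not from the study)
ter_scores = [0.10, 0.25, 0.40, 0.55, 0.70]
mins = [2, 5, 9, 11, 15]              # Mins: manual insertions
kdur = [3.1, 6.8, 10.2, 14.5, 19.0]   # KDur: coherent keyboard activity (s)

print(pearson(ter_scores, mins))
print(pearson(ter_scores, kdur))
```

A coefficient near 1 would indicate that segments with higher TER scores also required more manual edits or longer coherent keyboard activity, which is the pattern the study reports.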
    Title: Books of Abstracts of the 5th IATIS Conference: Innovation Paths in Translation and Intercultural Studies
    Editors: Fábio Alves, Adriana Silvina Pagano, Arthur de Melo Sá, Kícila Ferreguetti
    Number of pages: 1
    Place of publication: Belo Horizonte
    Publisher: International Association for Translation and Intercultural Studies. IATIS
    Status: Published - 2015
    Event: IATIS 5th International Conference: Innovation Paths in Translation and Intercultural Studies - Belo Horizonte, Brazil
    Duration: 7 Jul 2015 – 10 Jul 2015
    Conference number: 5


    Conference: IATIS 5th International Conference
    City: Belo Horizonte