Mutual Disambiguation of Eye Gaze and Speech for Sight Translation and Reading

Rucha Kulkarni, Kritika Jain, Himanshu Bansal, Srinivas Bangalore, Michael Carl

    Research output: Contribution to conference › Paper › Research › Peer-reviewed

    Abstract

    Researchers are proposing interactive machine translation as a potential method to make the language translation process more efficient and usable. Different modalities, such as eye gaze and speech, are being explored to add interactivity to language translation systems. Unfortunately, the raw data provided by Automatic Speech Recognition (ASR) and eye tracking is noisy and error-prone. This paper describes a technique for reducing the errors of the two modalities, speech and eye gaze, with the help of each other, in the context of sight translation and reading. Lattice representation and composition of the two modalities were used for integration. F-measure for eye gaze and word accuracy for ASR were used as metrics to evaluate our results. In the reading task, we demonstrated a significant improvement in both eye-gaze F-measure and speech word accuracy. In the sight translation task, a significant improvement was found in the gaze F-measure but not in ASR word accuracy.
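
    The record does not include implementation details, but the integration step — composing a speech lattice with a gaze lattice so that each modality reweights the other's hypotheses — can be illustrated with a minimal sketch. The toy lattices, costs, and the compose_best_path helper below are hypothetical and not taken from the paper; a simple product construction with a shortest-path search stands in for whatever lattice machinery (e.g., weighted finite-state composition) the authors actually used.

    ```python
    from heapq import heappush, heappop

    # Each lattice is a DAG: state -> list of (next_state, word, cost) arcs,
    # where a lower cost means a more confident hypothesis (e.g., -log prob).
    # Hypothetical ASR lattice: the recognizer slightly prefers "cap" over "cat".
    asr = {
        0: [(1, "the", 0.1)],
        1: [(2, "cat", 0.9), (2, "cap", 0.6)],
        2: [(3, "sat", 0.2)],
    }

    # Hypothetical gaze lattice: fixations strongly support "cat".
    gaze = {
        0: [(1, "the", 0.2)],
        1: [(2, "cat", 0.3), (2, "cap", 1.0)],
        2: [(3, "sat", 0.4)],
    }

    def compose_best_path(a, a_final, b, b_final):
        """Product construction over two lattices: an arc survives only if both
        modalities emit the same word, and its cost is the sum of the two arc
        costs. Dijkstra then returns the cheapest mutually consistent path."""
        heap = [(0.0, (0, 0), [])]
        done = set()
        while heap:
            cost, (sa, sb), words = heappop(heap)
            if (sa, sb) in done:
                continue
            done.add((sa, sb))
            if sa == a_final and sb == b_final:
                return words, cost
            for na, wa, ca in a.get(sa, []):
                for nb, wb, cb in b.get(sb, []):
                    if wa == wb:  # keep only arcs on which the modalities agree
                        heappush(heap, (cost + ca + cb, (na, nb), words + [wa]))
        return None, float("inf")

    if __name__ == "__main__":
        words, cost = compose_best_path(asr, 3, gaze, 3)
        print(words, cost)  # ['the', 'cat', 'sat'] 2.1
    ```

    In this toy example, ASR alone prefers the misrecognized "the cap sat" (cost 0.9 vs. 1.2 for "the cat sat"), but the gaze lattice strongly favours "cat", so the composed best path recovers "the cat sat" (cost 2.1 vs. 2.5). This mirrors the mutual-disambiguation effect the abstract reports, under the stated assumptions.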
    Original language: English
    Publication date: 2013
    Number of pages: 6
    Publication status: Published - 2013
    Event: 10th International Conference on Natural Language Processing - Centre for Development of Advanced Computing, Noida, India
    Duration: 18 Dec 2013 – 20 Dec 2013
    Conference number: 10
    http://ltrc.iiit.ac.in/icon/2013/index.php

    Conference

    Conference: 10th International Conference on Natural Language Processing
    Number: 10
    Location: Centre for Development of Advanced Computing
    Country/Territory: India
    City: Noida
    Period: 18/12/2013 – 20/12/2013
    Internet address: http://ltrc.iiit.ac.in/icon/2013/index.php
