Abstract
Researchers are proposing interactive machine translation as a potential method to make the language translation process more efficient and usable. The introduction of additional modalities, such as eye gaze and speech, is being explored to increase the interactivity of language translation systems. Unfortunately, the raw data provided by Automatic Speech Recognition (ASR) and eye tracking is noisy and error-prone. This paper describes a technique for reducing the errors of the two modalities, speech and eye gaze, by using each to correct the other, in the context of sight translation and reading. Lattice representation and composition of the two modalities were used for integration. F-measure for eye gaze and word accuracy for ASR were used as evaluation metrics. In the reading task, we demonstrated a significant improvement in both eye-gaze F-measure and speech word accuracy. In the sight translation task, a significant improvement was found in gaze F-measure but not in ASR word accuracy.
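The abstract names lattice composition as the integration mechanism. As a minimal sketch of that idea, the following toy example composes two weighted lattices (one standing in for ASR word hypotheses, one for gaze-supported words) so that only word sequences consistent with both modalities survive. The lattice encoding, the toy words, and the weights are illustrative assumptions; this is not the paper's actual implementation or data.

```python
# Minimal sketch: combining two noisy modality hypotheses by composing
# weighted finite-state acceptors. All structures and weights below are
# hypothetical toy data, not the paper's experimental setup.
from collections import defaultdict

# A lattice is encoded as arcs[state] -> list of (word, next_state, weight).
def compose(arcs_a, final_a, arcs_b, final_b):
    """Product construction: keep paths whose word sequences appear in
    both lattices; combine arc weights by multiplication."""
    start = (0, 0)
    arcs = defaultdict(list)
    stack, seen = [start], {start}
    while stack:
        sa, sb = stack.pop()
        for word_a, na, wa in arcs_a.get(sa, []):
            for word_b, nb, wb in arcs_b.get(sb, []):
                if word_a == word_b:
                    nxt = (na, nb)
                    arcs[(sa, sb)].append((word_a, nxt, wa * wb))
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
    return arcs, (final_a, final_b)

def best_path(arcs, final, state=(0, 0), weight=1.0, words=()):
    """Exhaustive best-path search (fine for small acyclic toy lattices)."""
    if state == final:
        return weight, words
    best = (0.0, ())
    for word, nxt, w in arcs.get(state, []):
        cand = best_path(arcs, final, nxt, weight * w, words + (word,))
        if cand[0] > best[0]:
            best = cand
    return best

# Toy ASR lattice: the recognizer slightly prefers the misrecognition "cap".
asr = {0: [("the", 1, 0.9)], 1: [("cat", 2, 0.4), ("cap", 2, 0.6)]}
# Toy gaze lattice: fixations support "the" and "cat", but not "cap".
gaze = {0: [("the", 1, 0.8)], 1: [("cat", 2, 0.7)]}

arcs, final = compose(asr, 2, gaze, 2)
print(best_path(arcs, final))  # -> roughly (0.2016, ('the', 'cat'))
```

Note the effect: the ASR lattice alone would prefer "the cap", but composition with the gaze lattice removes the unsupported hypothesis, which is the kind of mutual error reduction the abstract describes.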
Original language | English |
---|---|
Publication date | 2013 |
Number of pages | 6 |
Publication status | Published - 2013 |
Event | 10th International Conference on Natural Language Processing, Centre for Development of Advanced Computing, Noida, India. Duration: 18 Dec 2013 → 20 Dec 2013 |
Conference
Conference | 10th International Conference on Natural Language Processing |
---|---|
Number | 10 |
Location | Centre for Development of Advanced Computing |
Country/Territory | India |
City | Noida |
Period | 18/12/2013 → 20/12/2013 |
Internet address | http://ltrc.iiit.ac.in/icon/2013/index.php |