Abstract
Researchers have proposed interactive machine translation as a way to make the language translation process more efficient and usable, and different modalities, such as eye gaze and speech, are being explored to add to the interactivity of language translation systems. Unfortunately, the raw data provided by Automatic Speech Recognition (ASR) and eye tracking are noisy and error-prone. This paper describes a technique for reducing the errors of the two modalities, speech and eye gaze, with the help of each other in the context of sight translation and reading. The two modalities were integrated by representing each as a lattice and composing the lattices. F-measure for eye gaze and Word Accuracy for ASR were used as metrics to evaluate our results. In the reading task, we demonstrated a significant improvement in both eye-gaze F-measure and speech Word Accuracy; in the sight translation task, a significant improvement was found in gaze F-measure but not in ASR Word Accuracy.
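The abstract names the integration technique, lattice composition, but this entry gives no implementation details. Below is a minimal sketch, in Python, of what composing a noisy ASR word lattice with a gaze-derived word lattice could look like. The lattice format, example words, and weights are hypothetical, and a real system would use a WFST toolkit such as OpenFst, with epsilon arcs to handle insertions and deletions.

```python
"""Minimal sketch (not the authors' implementation) of integrating two
noisy modalities by lattice composition. Words and weights are invented
for illustration only."""

import heapq
from collections import defaultdict

# An arc is (src_state, dst_state, word_label, cost); lower cost = better.
asr_lattice = [            # competing ASR hypotheses for a speech region
    (0, 1, "the", 0.2), (0, 1, "a", 1.1),
    (1, 2, "translation", 0.4), (1, 2, "transition", 0.9),
]
gaze_lattice = [           # words plausibly fixated (gaze-to-text mapping)
    (0, 1, "the", 0.5), (0, 1, "this", 0.8),
    (1, 2, "translation", 0.3),
]

def compose(lat_a, lat_b):
    """Pair arcs that carry the same word label. States of the result
    are (state_a, state_b) pairs and costs add (negative log-probs)."""
    return [((sa, sb), (da, db), lab_a, wa + wb)
            for sa, da, lab_a, wa in lat_a
            for sb, db, lab_b, wb in lat_b
            if lab_a == lab_b]

def best_path(arcs, start, goal):
    """Dijkstra over the composed lattice: cheapest joint word sequence."""
    adj = defaultdict(list)
    for s, d, lab, w in arcs:
        adj[s].append((w, d, lab))
    frontier, seen = [(0.0, start, [])], set()
    while frontier:
        cost, state, words = heapq.heappop(frontier)
        if state == goal:
            return cost, words
        if state in seen:
            continue
        seen.add(state)
        for w, d, lab in adj[state]:
            heapq.heappush(frontier, (cost + w, d, words + [lab]))
    return None

composed = compose(asr_lattice, gaze_lattice)
print(best_path(composed, (0, 0), (2, 2)))
# e.g. (1.4, ['the', 'translation']): the path both modalities support
```

Composition keeps only word hypotheses supported by both modalities, which illustrates how the two noisy channels can correct each other, in line with the abstract's claim.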
Original language | English |
---|---|
Title of host publication | GazeIn '13. Proceedings of the 6th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Gaze in Multimodal Interaction |
Editors | Roman Bednarik, Hung-Hsuan Huang, Kristiina Jokinen, Yukiko I. Nakano |
Place of Publication | New York |
Publisher | Association for Computing Machinery |
Publication date | 2013 |
Pages | 35-40 |
ISBN (Print) | 9781450325639 |
Publication status | Published - 2013 |
Event | The 6th Workshop on Eye Gaze in Intelligent Human Machine Interaction, GazeIn '13: Gaze in Multimodal Interaction, Sydney, Australia, 13 Dec 2013 (full details under Workshop below) |
Workshop
Workshop | The 6th Workshop on Eye Gaze in Intelligent Human Machine Interaction. GazeIn '13 |
---|---|
Number | 6 |
Country/Territory | Australia |
City | Sydney |
Period | 13/12/2013 → 13/12/2013 |
Other | Connected to ACM ICMI 2013 |
Internet address | http://cs.uef.fi/~rbednari/GazeIn2013/ |