Abstract
Machine Translation (MT) and Computer-Assisted Translation (CAT) are considered complementary: the former handles the translation process automatically, while the latter draws on the aid of human translators to obtain better translation output. With the demand for high-quality translations, combining machine translation with computer-assisted translation has drawn attention in current research. This combines two prospects: the opportunity of ensuring high-quality translation along with a significant performance gain.
Automatic Speech Recognition (ASR) is another important area, which provides important functionality for language processing and natural language understanding tasks. In this work we integrate automatic speech recognition and machine translation in parallel. We aim to avoid manual typing of possible translations, since dictating the translation takes less time than typing, making the translation process faster. The spoken translation is analyzed and combined with the machine translation output of the same sentence using different methods. We study a number of different translation models in the context of n-best list rescoring methods. As an alternative to n-best list rescoring, we also use word graphs, with the expectation of arriving at a tighter integration of the ASR and MT models. Integration methods include constraining the ASR models using the language and translation models of MT, and vice versa.
We currently develop and experiment with different methods on the Danish–English language pair, using a speech corpus and parallel text. The methods are investigated to determine how the accuracy of the translator's spoken translation can be increased with the use of machine translation output, which would be useful for potential computer-assisted translation systems.
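The n-best list rescoring mentioned above can be illustrated with a minimal sketch: the recognizer's n-best hypotheses are re-ranked by combining the ASR score with an MT model score in a log-linear fashion. The function name, score interface, interpolation weights, and toy inputs below are illustrative assumptions, not the system described in the abstract.

```python
# Minimal sketch of n-best list rescoring (illustrative assumptions only):
# re-rank the ASR n-best hypotheses by log-linearly combining the ASR score
# with an MT model score (e.g. translation model + target language model).

def rescore_nbest(nbest, mt_score, w_asr=1.0, w_mt=0.5):
    """Return the best (hypothesis, combined_score) after rescoring.

    nbest        -- list of (hypothesis, asr_log_score) pairs from the recognizer
    mt_score     -- function mapping a hypothesis string to an MT log score
    w_asr, w_mt  -- hypothetical interpolation weights
    """
    rescored = [(hyp, w_asr * asr + w_mt * mt_score(hyp)) for hyp, asr in nbest]
    return max(rescored, key=lambda pair: pair[1])


# Usage with toy scores for a dictated target-language sentence.
nbest = [("the contract is signed", -12.3), ("the contract assigned", -11.9)]
best, score = rescore_nbest(nbest, mt_score=lambda hyp: -4.0 if "is signed" in hyp else -9.5)
```

In this toy example the MT score prefers the hypothesis that matches the machine translation output, so the rescoring overturns the recognizer's original ranking.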
Original language | English |
---|---|
Publication date | 2014 |
Number of pages | 1 |
Status | Published - 2014 |
Event | 2014 CRITT - WCRE Conference: Translation in Transition: Between Cognition, Computing and Technology - Copenhagen Business School, Frederiksberg, Denmark. Duration: 30 Jan 2014 → 31 Jan 2014. http://bridge.cbs.dk/platform/?q=conference2014 |
Conference
Conference | 2014 CRITT - WCRE Conference |
---|---|
Location | Copenhagen Business School |
Country/Territory | Denmark |
City | Frederiksberg |
Period | 30/01/2014 → 31/01/2014 |
Internet address | http://bridge.cbs.dk/platform/?q=conference2014 |