Predicting the Attitude Flow in Dialogue Based on Multi-Modal Speech Cues

Peter Juel Henrichsen, Jens Allwood

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

    Abstract

    We present our experiments on attitude detection based on annotated multi-modal dialogue data. Our long-term goal is to establish a computational model able to predict the attitudinal patterns in human-human dialogue. We believe such prediction algorithms are useful tools in the pursuit of realistic discourse behavior in conversational agents and other intelligent man-machine interfaces. The present paper deals with two important subgoals in particular: how to establish a meaningful and consistent set of annotation categories for attitude annotation, and how to relate the annotation data to the recorded data (audio and video) in computational models of attitude prediction. We present our current results, including a recommended set of analytical annotation labels and a recommended setup for extracting linguistically meaningful data even from noisy audio and video signals.
    Original language: English
    Title of host publication: NEALT 2012: Proceedings of the 4th Nordic Symposium on Multimodal Communication, Nov. 15-16, Gothenburg, Sweden
    Editors: Jens Allwood, Elisabeth Ahlsén, Patrizia Paggio, Kristiina Jokinen
    Place of publication: Göteborg
    Publisher: Göteborg Universitet
    Publication date: 2013
    Pages: 47-53
    Publication status: Published - 2013
    Event: The 4th Nordic Symposium on Multimodal Communication, University of Gothenburg, Gothenburg, Sweden
    Duration: 15 Nov 2012 - 16 Nov 2012
    Conference number: 4

    Conference

    Conference: The 4th Nordic Symposium on Multimodal Communication
    Number: 4
    Location: University of Gothenburg
    Country/Territory: Sweden
    City: Gothenburg
    Period: 15/11/2012 - 16/11/2012
    Series: Linköping Electronic Conference Proceedings
    Number: 93
    ISSN: 1650-3686