Predicting the Attitude Flow in Dialogue Based on Multi-Modal Speech Cues

Peter Juel Henrichsen, Jens Allwood

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


    We present our experiments on attitude detection based on annotated multi-modal dialogue data. Our long-term goal is to establish a computational model able to predict the attitudinal patterns in human-human dialogue. We believe such prediction algorithms are useful tools in the pursuit of realistic discourse behavior in conversational agents and other intelligent man-machine interfaces. The present paper deals with two important subgoals in particular: how to establish a meaningful and consistent set of categories for attitude annotation, and how to relate the annotation data to the recorded data (audio and video) in computational models of attitude prediction. We present our current results, including a recommended set of analytical annotation labels and a recommended setup for extracting linguistically meaningful data even from noisy audio and video signals.
    Original language: English
    Title of host publication: NEALT 2012: Proceedings of the 4th Nordic Symposium on Multimodal Communication, Nov. 15-16, Gothenburg, Sweden
    Editors: Jens Allwood, Elisabeth Ahlsén, Patrizia Paggio, Kristiina Jokinen
    Place of publication: Göteborg
    Publisher: Göteborg Universitet
    Publication date: 2013
    Publication status: Published - 2013
    Event: The 4th Nordic Symposium on Multimodal Communication - University of Gothenburg, Gothenburg, Sweden
    Duration: 15 Nov 2012 to 16 Nov 2012
    Conference number: 4


    Conference: The 4th Nordic Symposium on Multimodal Communication
    Location: University of Gothenburg
    Series: Linköping Electronic Conference Proceedings