Multimodal prediction of head nods in dyadic conversations

Signal Processing and Communications Applications Conference (2018)

Abstract
Non-verbal expressions in human interactions carry important messages. These messages, which constitute a significant part of the information to be transferred, are not yet used effectively by machines in human-robot/agent interaction. In this study, the purpose is to predict potential head nod moments for a robot/agent and thereby to develop more human-like interfaces. To achieve this, acoustic feature extraction and social signal annotation are carried out on human-human dyadic conversations. A fixed history window preceding each head nod instance is fed to a binary classifier. Upon classification by Support Vector Machines, `potential head nod' or `no head nod' outputs are obtained. More than half of the head nods are successfully predicted as `potential head nod', which yields promising results for human-like robots/agents.
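As a minimal sketch of the pipeline the abstract describes, the following Python example feeds flattened acoustic features from a fixed history window into an SVM binary classifier. The feature choice (MFCC-like vectors), window length, and synthetic data here are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

N_FRAMES = 2000  # total feature frames in a (synthetic) conversation
N_FEATS = 13     # e.g. 13 MFCC coefficients per frame (assumption)
WINDOW = 20      # history window length in frames (assumption)

# Synthetic stand-ins for real extracted acoustic features and
# head-nod annotations; replace with actual data in practice.
features = rng.normal(size=(N_FRAMES, N_FEATS))
labels = (rng.random(N_FRAMES) < 0.1).astype(int)  # 1 = head nod onset

# Build one flattened feature vector per frame from the preceding window.
X = np.stack([features[t - WINDOW:t].ravel() for t in range(WINDOW, N_FRAMES)])
y = labels[WINDOW:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Head nods are the rare class, so weight classes inversely to frequency.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X_tr, y_tr)

print(classification_report(
    y_te, clf.predict(X_te),
    target_names=["no head nod", "potential head nod"],
))
```

With real data, `features` would come from an acoustic front-end and `labels` from the social signal annotations; the windowing and classification steps stay the same.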
Key words
Human-Computer Interaction, Head Nodding, Social Signal Processing, Non-verbal Expressions, Backchannels, Intention Recognition