Gesticulating with NAO: Real-time Context-Aware Co-Speech Gesture Generation for Human-Robot Interaction

ICMI '23 Companion: Companion Publication of the 25th International Conference on Multimodal Interaction (2023)

Abstract
Humans naturally produce nonverbal behaviours through facial, body, and vocal expressions to convey their messages, intentions, and feelings to their interacting partners. As robots progressively move out of research laboratories and into human environments, there is an increasing need for them to develop similar social intelligence skills. Equipping robots with human nonverbal communication skills has therefore been an active research area, in which data-driven, end-to-end learning approaches have become predominant, offering scalability and generalisability. However, most recent works model only a single character's intrapersonal dynamics, without attending to the interacting partner's behaviours. Our research aims to address this gap in the literature by introducing a generative framework that allows social robots to produce co-speech gestures accompanying their speech in real-time human-robot interaction. Notably, the system also considers nonverbal signals observed from the interacting partner as a conditional input for producing the robot's communicative gestures.
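The abstract does not specify the model architecture, so the following is only a minimal, hypothetical sketch of the general idea of conditioning gesture generation on both the robot's speech and the partner's observed nonverbal signals. The GRU-based layout, feature dimensions, and the assumed set of NAO upper-body joints are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a conditional co-speech gesture generator.
# NOT the paper's model; dimensions, architecture, and joint count are assumptions.
import torch
import torch.nn as nn

NAO_UPPER_BODY_JOINTS = 10  # assumed number of controlled joints (head + arms)

class ContextAwareGestureGenerator(nn.Module):
    def __init__(self, speech_dim=128, partner_dim=64, hidden_dim=256):
        super().__init__()
        # Encode the robot's own per-frame speech features (e.g. audio/text embeddings).
        self.speech_enc = nn.GRU(speech_dim, hidden_dim, batch_first=True)
        # Encode the interacting partner's observed nonverbal signals (e.g. pose, gaze).
        self.partner_enc = nn.GRU(partner_dim, hidden_dim, batch_first=True)
        # Decode a joint-angle trajectory conditioned on both streams.
        self.decoder = nn.GRU(hidden_dim * 2, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, NAO_UPPER_BODY_JOINTS)

    def forward(self, speech_feats, partner_feats):
        # speech_feats:  (batch, T, speech_dim)  per-frame speech representation
        # partner_feats: (batch, T, partner_dim) per-frame partner behaviour features
        s, _ = self.speech_enc(speech_feats)
        p, _ = self.partner_enc(partner_feats)
        fused = torch.cat([s, p], dim=-1)  # frame-wise fusion of both contexts
        h, _ = self.decoder(fused)
        return self.out(h)  # (batch, T, joints): joint angles to send to the robot

if __name__ == "__main__":
    model = ContextAwareGestureGenerator()
    speech = torch.randn(1, 50, 128)   # 50 frames of speech features
    partner = torch.randn(1, 50, 64)   # 50 frames of partner behaviour features
    gestures = model(speech, partner)
    print(gestures.shape)              # torch.Size([1, 50, 10])
```

In such a setup, the partner-behaviour stream acts purely as a conditioning signal: dropping it reduces the model to the single-character case the abstract criticises.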