Emotion Dependent Facial Animation from Affective Speech

2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)

Abstract
In human-computer interaction, facial animation synchronized with affective speech can deliver more naturalistic conversational agents. In this paper, we present a two-stage deep learning approach for affective-speech-driven facial shape animation. In the first stage, we classify affective speech into seven emotion categories. In the second stage, we train a separate deep estimator within each emotion category to synthesize the facial shape from the affective speech. Objective and subjective evaluations are performed on the SAVEE dataset. The proposed emotion-dependent facial shape model achieves a lower Mean Squared Error (MSE) loss and generates better landmark animations than a universal model trained without regard to emotion.
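To make the two-stage design concrete, here is a minimal sketch of such a pipeline in PyTorch: a classifier routes each speech frame to one of seven per-emotion landmark regressors. The module names, feature dimension, landmark count, and layer sizes are all assumptions for illustration; the abstract does not specify the network architecture.

```python
# Minimal sketch of a two-stage emotion-dependent animation pipeline.
# All module names, feature dimensions, and layer sizes are assumptions;
# the paper's actual architecture is not given in this abstract.
import torch
import torch.nn as nn

NUM_EMOTIONS = 7        # seven emotion categories (per the abstract)
FEAT_DIM = 39           # assumed acoustic feature size (e.g., MFCCs + deltas)
LANDMARK_DIM = 68 * 2   # assumed: 68 two-dimensional facial landmarks

class EmotionClassifier(nn.Module):
    """Stage 1: classify an affective-speech feature frame into an emotion."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_EMOTIONS),
        )

    def forward(self, x):
        return self.net(x)  # unnormalized class logits

class ShapeEstimator(nn.Module):
    """Stage 2: per-emotion regressor from speech features to landmarks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 256), nn.ReLU(),
            nn.Linear(256, LANDMARK_DIM),
        )

    def forward(self, x):
        return self.net(x)

classifier = EmotionClassifier()
estimators = nn.ModuleList([ShapeEstimator() for _ in range(NUM_EMOTIONS)])

def animate(speech_feats):
    """Route each frame to the estimator of its predicted emotion."""
    emotion = classifier(speech_feats).argmax(dim=-1)  # (batch,)
    out = torch.stack(
        [estimators[int(e)](f) for e, f in zip(emotion, speech_feats)]
    )
    return out  # (batch, LANDMARK_DIM)

# Example: 4 frames of speech features -> 4 landmark-vector predictions.
frames = torch.randn(4, FEAT_DIM)
print(animate(frames).shape)  # torch.Size([4, 136])
```

In this sketch each per-emotion estimator would be trained with an MSE loss (matching the abstract's evaluation metric) only on frames of its own category, mirroring the paper's choice of separate emotion-dependent estimators rather than a single universal model.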
Keywords
emotion categories, affective speech, emotion-dependent facial shape model, emotion-dependent facial animation, two-stage deep learning, facial shape animation, mean squared error, SAVEE dataset, landmark animations