Implicit Media Tagging and Affect Prediction from RGB-D Video of Spontaneous Facial Expressions

2017 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017)

Abstract
We present a method that automatically evaluates emotional response from spontaneous facial activity. The automatic evaluation of emotional response, or affect, is a fascinating challenge with many applications. Our approach is based on the inferred activity of facial muscles over time, as automatically obtained from an RGB-D video recording of spontaneous facial activity. Our contribution is twofold. First, we constructed a database of publicly available short video clips that elicit a strong emotional response in a consistent manner across different individuals. Each video was tagged with its characteristic emotional response along four scales: Valence, Arousal, Likability, and Rewatch (the desire to watch again). The second contribution is a two-step, learning-based prediction method, which was trained and tested on this database of tagged video clips. Our method successfully predicted the aforementioned four-dimensional representation of affect, achieving high correlation (0.87-0.95) between the predicted scores and the affect tags. As part of the prediction algorithm, we identified the period of strongest emotional response in the viewing recordings using a method that was blind to the video clip being watched, showing high agreement between independent viewers. Finally, inspection of the relative contribution of different feature types to the prediction process revealed that temporal features contributed more to the prediction of individual affect than to media tags.
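The abstract gives no implementation details, so the following is only a minimal Python sketch of the two-step idea it describes: step 1 locates the period of strongest facial response from per-frame muscle-activation features (blind to the clip being watched), and step 2 regresses the four affect scales from features pooled over that window, evaluated by Pearson correlation. The function names, the window length, the pooled statistics, the random-forest regressor, and the toy data are all hypothetical stand-ins, not the authors' method; upstream extraction of facial-unit activations from the RGB-D video is assumed.

```python
# Hypothetical sketch of the paper's two-step prediction pipeline.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def strongest_window(activations, win=30):
    """Step 1: pick the window with the highest total facial activation.

    activations: (T, n_units) per-frame facial-muscle intensities.
    Uses only the viewer's face signal, so it is blind to the clip watched.
    """
    energy = np.abs(activations).sum(axis=1)
    scores = np.convolve(energy, np.ones(win), mode="valid")
    start = int(scores.argmax())
    return activations[start:start + win]

def pooled_features(window):
    """Pool per-frame activations into one vector (mean, std, max per unit)."""
    return np.concatenate([window.mean(0), window.std(0), window.max(0)])

# Toy data: 40 viewing sessions, 300 frames, 17 facial units, 4 affect tags.
X = np.stack([pooled_features(strongest_window(rng.standard_normal((300, 17))))
              for _ in range(40)])
y = rng.standard_normal((40, 4))  # Valence, Arousal, Likability, Rewatch

# Step 2: one regressor per affect scale, scored by Pearson correlation
# (the paper reports 0.87-0.95 on its database; toy data will not reproduce that).
for k, name in enumerate(["Valence", "Arousal", "Likability", "Rewatch"]):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[:30], y[:30, k])
    r, _ = pearsonr(model.predict(X[30:]), y[30:, k])
    print(f"{name}: r = {r:.2f}")
```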
Keywords
implicit media tagging,spontaneous facial expressions,emotional response automatic evaluation,RGB-D video recording,facial muscle inferred activity,short video clips,tagged video clips,aforementioned 4-dimensional representation