Emotion Recognition Based on Decoupling the Spatial Context from the Temporal Dynamics of Facial Expressions

2019 International Symposium on Networks, Computers and Communications (ISNCC), 2019

Abstract
This paper presents an emotion recognition approach based on decoupling the spatial context from the temporal dynamics of facial expressions in video sequences. In particular, each emotional state is represented as a set of temporal phases, where each temporal phase exhibits different temporal dynamics such as the expression speed and the variable length of each phase. In this work, we have developed an algorithm for automatically detecting the temporal phases of human facial expressions by employing the concept of mutual information to define a similarity measure among different video frames. Moreover, we have developed a two-layer framework for emotional state recognition. The first layer utilizes the spatial context to classify the frames in an input video into emotion-specific temporal phases using a support vector machine classifier. In the second layer, dynamic time warping is used to classify the sequence of labels associated with the video frames, which is generated in the first layer, into a specific emotional state. To validate the performance of the proposed approach, we have conducted extensive computer simulations, and the results show an average classification accuracy of 93.53% on the extended Cohn-Kanade facial-expression database.
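The two building blocks named in the abstract can be sketched as follows: a histogram-based mutual-information similarity between video frames (the paper's exact MI estimator is not given, so the joint-histogram estimate here is an assumption), and a dynamic-time-warping match of a frame-label sequence against per-emotion template sequences. The phase labels and template dictionary below are hypothetical illustrations, not the paper's data.

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """MI between two grayscale frames, estimated from their joint
    intensity histogram (one common estimator; an assumption here)."""
    joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of frame_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of frame_b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def dtw_distance(seq_a, seq_b):
    """DTW distance between two label sequences, unit cost per mismatch,
    so repeated labels (slow expressions) warp onto shorter templates."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if seq_a[i - 1] == seq_b[j - 1] else 1.0
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify(label_seq, templates):
    """Second layer: pick the emotion whose template phase sequence
    is closest to the SVM-produced label sequence under DTW."""
    return min(templates, key=lambda emo: dtw_distance(label_seq, templates[emo]))
```

A usage sketch, with made-up phase labels: `classify(["neutral", "neutral", "onset", "apex"], {"happy": ["neutral", "onset", "apex"], "surprise": ["neutral", "apex"]})` returns `"happy"`, since the repeated `"neutral"` frames warp onto the single template entry at zero cost.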
Keywords
Emotion recognition, temporal phase detection, support vector machines, mutual information