System and method for recognizing human emotion state based on analysis of speech and facial feature extraction; applications to human-robot interaction

2016 4th International Conference on Robotics and Mechatronics (ICROM)

Abstract
Humanoid robots and artificial intelligence require natural mutual communication to interact with ordinary people. Advances in emotional interaction research build on a growing understanding of speech recognition and facial expression processing. Software designed for human-robot interaction needs to perceive human emotions and adapt the robot's behavior accordingly. In this paper, we report the results of an exploratory study on software that automatically recognizes and classifies six basic emotional states (sadness, surprise, happiness, anger, fear, and disgust). The study consists of generating and analyzing plots of speech signals using the pitch, intensity, and formant properties of emotive speech. In parallel, a facial feature extraction phase applies a mathematical formulation to measure a set of Action Units (AUs) for emotion classification. The methodology was evaluated experimentally on 300 individuals (150 female and 150 male, 20 to 48 years old) from multi-ethnic groups, namely: (i) European, (ii) Middle Eastern Asian, and (iii) American. In these experiments, the proposed model achieved an emotion detection time of 2.53 s, as we first defined more distinct boundaries between emotions in order to classify features into the set of basic emotions.
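The abstract names the concrete features on both channels: pitch, intensity, and formants on the speech side, and Action Units on the facial side. As a rough illustration of how such features might be computed, here is a minimal Python sketch using librosa for the acoustic properties (YIN pitch, RMS intensity, LPC-root formant estimates) and a common EMFACS-style AU-to-emotion prototype table. The paper's actual toolchain, LPC order, and AU rule set are not given in the abstract, so everything below is an assumption, not the authors' implementation.

```python
import numpy as np
import librosa

def extract_prosodic_features(path, sr=16000):
    """Pitch, intensity, and rough formant estimates for one utterance
    (illustrative only; not the paper's exact procedure)."""
    y, sr = librosa.load(path, sr=sr)

    # Pitch (F0) contour via the YIN algorithm.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)

    # Intensity proxy: frame-wise RMS energy.
    rms = librosa.feature.rms(y=y)[0]

    # Rough formants from the angles of LPC polynomial roots
    # (standard technique; LPC order is a rule of thumb).
    a = librosa.lpc(y, order=2 + sr // 1000)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(roots) * sr / (2 * np.pi))
    formants = [f for f in freqs if f > 90][:3]  # approx. F1-F3

    return {
        "f0_mean": float(np.nanmean(f0)),
        "f0_range": float(np.nanmax(f0) - np.nanmin(f0)),
        "rms_mean": float(rms.mean()),
        "formants": formants,
    }

# Prototypical AU combinations for the six basic emotions
# (EMFACS-style prototypes from the FACS literature; the paper's
# exact AU set and thresholds are not stated in the abstract).
AU_PROTOTYPES = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
    "surprise": {1, 2, 5, 26},
    "fear": {1, 2, 4, 5, 7, 20, 26},
    "anger": {4, 5, 7, 23},
    "disgust": {9, 15, 16},
}

def classify_aus(active_aus):
    """Pick the emotion whose AU prototype overlaps the detected AUs most."""
    return max(AU_PROTOTYPES,
               key=lambda e: len(AU_PROTOTYPES[e] & set(active_aus)))
```

The whole-utterance LPC here yields only a single global formant estimate; per-frame formant tracking over voiced segments would be closer to the speech-signal plots the abstract describes.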
Keywords
Action Units, facial expression, pitch, speech recognition