An audiovisual and contextual approach for categorical and continuous emotion recognition in-the-wild.

IEEE International Conference on Computer Vision (2021)

Cited by 25
Abstract
In this work we tackle the task of video-based audiovisual emotion recognition, within the premises of the 2nd Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW2). Poor illumination, head/body orientation, and low image resolution can hinder the performance of methodologies that rely solely on the extraction and analysis of facial features. To alleviate this problem, we leverage both bodily and contextual features as part of a broader emotion recognition framework. We use a standard CNN-RNN cascade as the backbone of our proposed model for sequence-to-sequence (seq2seq) learning. Apart from learning through the RGB input modality, we construct an aural stream that operates on sequences of extracted mel-spectrograms. Our extensive experiments on the challenging and newly assembled Aff-Wild2 dataset verify the validity of our intuitive multi-stream, multi-modal approach to emotion recognition "in-the-wild". Emphasis is placed on the beneficial influence of the human body and scene context, aspects of the emotion recognition process that have remained relatively unexplored until now. All code was implemented in PyTorch and is publicly available.
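The aural stream described above operates on extracted mel-spectrograms. As a minimal sketch of how such features are computed (a generic log-mel pipeline in NumPy, not the authors' exact implementation; all parameter values here, such as `n_mels=64` and `n_fft=1024`, are illustrative assumptions):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels=64, n_fft=1024, sr=16000):
    # Triangular filters with centers evenly spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(signal, sr=16000, n_fft=1024, hop=512, n_mels=64):
    # Frame the waveform, apply a Hann window, take the power spectrum,
    # then project onto the mel filterbank and compress with a log
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-6)  # shape: (n_frames, n_mels)
```

Sequences of such (frames, mel-bands) matrices would then be fed to the aural branch, in parallel with the RGB stream consumed by the CNN-RNN backbone.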
Keywords
audiovisual approach, contextual approach, categorical emotion recognition, continuous emotion recognition, video-based audio-visual emotion recognition, Affective Behavior Analysis in-the-wild, ABAW2, low image resolution, facial features, bodily features, contextual features, standard CNN-RNN cascade, sequence-to-sequence learning, seq2seq, RGB input modality, aural stream, extracted mel-spectrograms, intuitive multistream, multimodal approach, human body, assembled Aff-Wild2 dataset