Multimodal Multilevel Fusion for Sequential Protective Behavior Detection and Pain Estimation

2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)

Abstract
In this paper, we present our approach to the FG 2020 EmoPain Challenge for Task 2 (pain estimation) and Task 3 (protective behavior detection) from multimodal movement data. We propose to perform sequential protective behavior detection and pain estimation using human movement information. First, we predict the presence of pain, and then use this prediction together with the multimodal movement data for protective behavior detection. Finally, this information is fused to estimate the level of pain. In this work, we apply both early fusion (feature fusion over metadata, modalities, exercises, and probabilities) and post-fusion (decision fusion). The proposed approach is encouraging: it outperforms the baseline by a large margin for both pain estimation and protective behavior detection on the EmoPain 2020 challenge dataset.
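Below is a minimal sketch of the sequential fusion pipeline described in the abstract, assuming frame-level features and off-the-shelf scikit-learn models. The modality dimensions, model choices, and variable names (mocap, semg, p_pain, etc.) are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch of the sequential multilevel fusion pipeline.
# All shapes, targets, and model choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Toy multimodal movement data: motion-capture and sEMG features,
# plus metadata and a one-hot exercise type (all synthetic here).
n = 200
mocap = rng.normal(size=(n, 26))                   # e.g., joint-angle features
semg = rng.normal(size=(n, 4))                     # e.g., sEMG envelope features
meta = rng.integers(0, 2, size=(n, 1))             # metadata (illustrative)
exercise = np.eye(5)[rng.integers(0, 5, size=n)]   # exercise one-hot encoding

y_pain_presence = rng.integers(0, 2, size=n)       # step 1 target
y_protective = rng.integers(0, 2, size=n)          # step 2 target
y_pain_level = rng.uniform(0, 10, size=n)          # step 3 target

# Step 1: predict the presence of pain from movement data alone.
movement = np.hstack([mocap, semg])
pain_clf = RandomForestClassifier(n_estimators=100, random_state=0)
pain_clf.fit(movement, y_pain_presence)
p_pain = pain_clf.predict_proba(movement)[:, [1]]

# Step 2: early fusion -- concatenate modalities, metadata, exercise
# encoding, and the step-1 probability for protective behavior detection.
fused = np.hstack([mocap, semg, meta, exercise, p_pain])
protect_clf = RandomForestClassifier(n_estimators=100, random_state=0)
protect_clf.fit(fused, y_protective)
p_protect = protect_clf.predict_proba(fused)[:, [1]]

# Step 3: estimate the pain level from the fused features plus the
# protective-behavior probability.
level_reg = RandomForestRegressor(n_estimators=100, random_state=0)
level_reg.fit(np.hstack([fused, p_protect]), y_pain_level)
pain_level = level_reg.predict(np.hstack([fused, p_protect]))

# Post-fusion (decision fusion): average per-modality predictions as a
# simple stand-in for the decision-level fusion step.
preds = []
for X in (mocap, semg):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X, y_protective)
    preds.append(clf.predict_proba(X)[:, 1])
fused_decision = np.mean(preds, axis=0) > 0.5
```

The sketch mirrors the cascade described in the abstract: each stage passes its prediction forward as an extra feature, so later stages can condition on earlier decisions rather than learning all tasks jointly. In practice, fitting and predicting would use separate train and test splits.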
Key words
multimodal multilevel fusion, protective behavior detection, pain estimation, multimodal movement data, human movement information, feature fusion, metadata, decision fusion