
ASTDF-Net: Attention-Based Spatial-Temporal Dual-Stream Fusion Network for EEG-Based Emotion Recognition

Proceedings of the 31st ACM International Conference on Multimedia (MM 2023)

Abstract
Emotion recognition based on electroencephalography (EEG) has attracted significant attention and achieved considerable advances in the fields of affective computing and human-computer interaction. However, most existing studies ignore the coupling and complementarity of complex spatiotemporal patterns in EEG signals. Moreover, how to exploit and fuse crucial discriminative aspects in high redundancy and low signal-to-noise ratio EEG signals remains a great challenge for emotion recognition. In this paper, we propose a novel attention-based spatial-temporal dual-stream fusion network, named ASTDF-Net, for EEG-based emotion recognition. Specifically, ASTDF-Net comprises three main stages: first, the collaborative embedding module is designed to learn a joint latent subspace to capture the coupling of complicated spatiotemporal information in EEG signals. Second, stacked parallel spatial and temporal attention streams are employed to extract the most essential discriminative features and filter out redundant task-irrelevant factors. Finally, the hybrid attention-based feature fusion module is proposed to integrate significant features discovered from the dual-stream structure to take full advantage of the complementarity of the diverse characteristics. Extensive experiments on two publicly available emotion recognition datasets indicate that our proposed approach consistently outperforms state-of-the-art methods.
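The abstract describes a three-stage pipeline (joint spatiotemporal embedding, parallel spatial and temporal attention streams, hybrid attention-based fusion) without giving layer details. The following is a minimal NumPy sketch of the dual-stream-with-fusion idea only; the channel/time sizes, the single-head self-attention, and the scalar fusion gate are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention over the rows of x: (n, d) -> (n, d)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)        # (n, n) pairwise similarities
    return softmax(scores, axis=-1) @ x  # attention-weighted recombination

# Toy EEG segment: 32 channels x 128 time steps (hypothetical sizes).
eeg = rng.standard_normal((32, 128))

# Spatial stream: attend across channels (rows).
spatial = self_attention(eeg)            # (32, 128)

# Temporal stream: attend across time steps (attend over columns via transpose).
temporal = self_attention(eeg.T).T       # (32, 128)

# Hybrid fusion: a scalar attention gate per stream, standing in for the
# paper's hybrid attention-based feature fusion module.
weights = softmax(np.array([spatial.mean(), temporal.mean()]))
fused = weights[0] * spatial + weights[1] * temporal  # (32, 128)
```

In the real model each stream would be a stack of learned attention layers and the fusion gate would itself be attention-based; the sketch only shows how the two streams operate on orthogonal axes of the same signal before being merged.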
Key words
Emotion Recognition, EEG, Affective Computing, Neural Network, Attention