Multi-view domain-adaptive representation learning for EEG-based emotion recognition

Information Fusion (2024)

Abstract
Current research suggests that EEG-based emotion recognition faces several limitations, including redundant and uninformative time frames and channels, as well as inter- and intra-individual differences in EEG signals across subjects. To address these limitations, a Cross-Attention-based Dilated Causal Convolutional Neural Network with Domain Discriminator (CADD-DCCNN) for multi-view EEG-based emotion recognition is proposed to minimize individual differences and automatically learn more discriminative emotion-related features. First, differential entropy (DE) features are extracted from the raw EEG signals using the short-time Fourier transform (STFT). Second, each channel of the DE features is treated as a view, and attention mechanisms are applied within each view to aggregate discriminative affective information at the level of individual EEG time frames. Then, a dilated causal convolutional neural network is employed to distill nonlinear relationships among different time frames. Next, feature-level fusion combines the features from multiple channels, aiming to exploit the potential complementary information among the different views and enhance the representational power of the features. Finally, to minimize individual differences, a domain discriminator is employed to generate domain-invariant features, projecting data from different domains into a shared representation space. We evaluated the proposed method on two public datasets, SEED and DEAP. The experimental results show that CADD-DCCNN outperforms state-of-the-art (SOTA) methods.
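The staged pipeline described in the abstract lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch implementation (not the authors' code) of the main stages: per-view attention over DE time frames, dilated causal convolutions, feature-level fusion across channel views, and an adversarial domain discriminator trained through gradient reversal. All module names, layer sizes, and the plain self-attention used as a stand-in for the paper's cross-attention are illustrative assumptions; only the DE formula for a Gaussian band and the gradient-reversal trick are standard.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def de_feature(band_variance):
    # Differential entropy of a Gaussian-distributed frequency band:
    # DE = 0.5 * log(2 * pi * e * sigma^2). Input: per-band signal variance.
    return 0.5 * torch.log(2 * math.pi * math.e * band_variance)

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated gradient backward,
    which trains the feature extractor adversarially against the domain head."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class ViewEncoder(nn.Module):
    """One view = one EEG channel's DE sequence. Self-attention (a stand-in
    for the paper's cross-attention) weights time frames, then stacked
    dilated causal convolutions model nonlinear temporal relationships."""
    def __init__(self, bands=5, hidden=32, levels=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(bands, num_heads=1, batch_first=True)
        self.proj = nn.Linear(bands, hidden)
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=2 ** i)
            for i in range(levels))
    def forward(self, x):                      # x: (batch, T, bands)
        a, _ = self.attn(x, x, x)              # frame-level attention within the view
        h = self.proj(a).transpose(1, 2)       # (batch, hidden, T)
        for conv in self.convs:
            pad = (conv.kernel_size[0] - 1) * conv.dilation[0]
            h = F.relu(conv(F.pad(h, (pad, 0))))  # left-pad only => causal
        return h.mean(dim=2)                   # (batch, hidden)

class CADDDCCNNSketch(nn.Module):
    """Hypothetical end-to-end sketch: per-view encoders, feature-level
    fusion by concatenation, an emotion classifier, and a domain
    discriminator behind a gradient reversal layer."""
    def __init__(self, n_channels=62, bands=5, hidden=32, n_emotions=3):
        super().__init__()
        self.views = nn.ModuleList(
            ViewEncoder(bands, hidden) for _ in range(n_channels))
        fused = n_channels * hidden
        self.classifier = nn.Linear(fused, n_emotions)
        self.domain_disc = nn.Sequential(
            nn.Linear(fused, 64), nn.ReLU(), nn.Linear(64, 2))
    def forward(self, de, lam=1.0):            # de: (batch, channels, T, bands)
        feats = [enc(de[:, c]) for c, enc in enumerate(self.views)]
        fused = torch.cat(feats, dim=1)        # feature-level fusion across views
        emotion_logits = self.classifier(fused)
        domain_logits = self.domain_disc(GradReverse.apply(fused, lam))
        return emotion_logits, domain_logits

# Usage with SEED-like shapes: 62 channels, 5 DE bands, 30 time frames.
model = CADDDCCNNSketch(n_channels=62, bands=5, n_emotions=3)
de = torch.randn(4, 62, 30, 5)
emotion_logits, domain_logits = model(de, lam=0.5)
```

In training, the emotion loss would be computed on labeled source data while the domain loss is computed on source and target data jointly; the reversed gradient pushes the fused features toward domain invariance, which is the role the abstract assigns to the domain discriminator.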
Keywords
Affective computing, Cross-attention, Domain adaptation, EEG, Emotion recognition, Multi-view learning