ECMER: Edge-Cloud Collaborative Personalized Multimodal Emotion Recognition Framework in the Internet of Vehicles

IEEE Network (2023)

Abstract
Real-time driver emotion recognition and timely risk warning can effectively reduce the incidence of traffic accidents. However, existing emotion recognition methods obtain emotion features from human physiological signals and are unsuitable for complex scenarios in the Internet of Vehicles (IoV). Moreover, existing methods in the IoV cannot fully use the resources of edge devices to mine the driver's personality, resulting in limited accuracy. To address these problems, we propose a novel Edge-Cloud Collaborative Multimodal Emotion Recognition Framework (ECMER). The driver's facial expression and audio data are loaded to the edge for preliminary computation, including coarse-grained facial expression recognition and extraction of the driver's personality features, which are then uploaded to the cloud for cross-fusion. Specifically, a personality-coupled driver emotion recognition method is proposed, introducing the Big Five Model from a psychological perspective. The facial expression features contained in images and the audio features in videos are employed to compute the driver's personality features, which are further fused with the multimodal features. Subsequently, a hierarchical multi-granularity driver emotion recognition method is designed, in which real-time coarse-granularity driver emotion recognition is conducted on edge devices to reduce data transmission pressure and cloud computing load. Empirical results on real-world datasets demonstrate that driver emotion recognition performance is improved under this architecture.
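The abstract describes a two-stage pipeline: the edge produces a coarse-grained emotion label plus Big Five personality features, and the cloud cross-fuses those features with the multimodal features for fine-grained recognition. The sketch below illustrates that split only; all function names, feature dimensions, and the stubbed classifiers are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

# Illustrative sketch of the ECMER edge/cloud split described in the
# abstract. The classifiers are stubbed with fixed projections; real
# implementations would use trained models on both tiers.

COARSE_EMOTIONS = ["negative", "neutral", "positive"]
FINE_EMOTIONS = ["anger", "fear", "sadness", "neutral", "surprise", "happiness"]

def edge_inference(face_feat, audio_feat):
    """Edge device: coarse-grained recognition + personality extraction."""
    # Coarse emotion: a lightweight classifier is assumed; stubbed here
    # as an argmax over a 3-way slice of the facial features.
    coarse = COARSE_EMOTIONS[int(np.argmax(face_feat[:3]))]
    # Big Five personality features (openness, conscientiousness,
    # extraversion, agreeableness, neuroticism) are assumed to come from
    # a learned mapping; stubbed as a bounded mix of both modalities.
    personality = np.tanh(face_feat[:5] + audio_feat[:5])
    return coarse, personality  # only this compact result goes to the cloud

def cloud_fusion(face_feat, audio_feat, personality):
    """Cloud: cross-fuse multimodal features with personality features."""
    fused = np.concatenate([face_feat, audio_feat, personality])
    # Fine-grained classification stub: project the fused vector to the
    # six fine-grained emotion classes.
    return FINE_EMOTIONS[int(np.argmax(fused[: len(FINE_EMOTIONS)]))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face, audio = rng.standard_normal(16), rng.standard_normal(16)
    coarse, personality = edge_inference(face, audio)
    fine = cloud_fusion(face, audio, personality)
    print(coarse, fine, personality.shape)
```

Transmitting only the coarse label and the 5-dimensional personality vector (rather than raw video and audio) is what reduces the transmission pressure and cloud load the abstract refers to.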
Keywords
Emotion recognition, Cloud computing, Computational modeling, Image edge detection, Collaboration, Computer architecture, Real-time systems, Connected vehicles, Autonomous vehicles