RLCA: Reinforcement Learning Model Integrating Cognition and Affection for Empathetic Response Generation

Yun Su, Haoran Bian, Bozhen Fan, Bingxu Lian, Chengrong Zhang, Bingtao Zhang, Runhe Huang

IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS (2024)

Abstract
Empathy is a crucial topic in social science research. Current research on empathetic dialog systems has two main limitations: 1) inadequate integration of the cognition and affection aspects of empathy, which limits the model's perceptual and emotional expression abilities, and 2) no sentence-level evaluation of the generated response during training, which leaves the exposure bias problem unaddressed. Therefore, we propose the reinforcement learning model integrating cognition and affection (RLCA), which uses an RL framework integrating cognition and affection to evoke stronger empathetic expression. In particular, the cognitive response generator reasons over commonsense information based on the user's situation to improve the perceptual capabilities of the proposed model. Moreover, the emotional regulator mitigates the exposure bias problem by distilling multiple emotion signals from predicted responses, imparting higher emotional intelligence to the proposed model. Furthermore, the interaction between the cognition and affection aspects helps the model learn the features of empathetic expression in human conversation. Extensive experiments on a benchmark dataset indicate that RLCA outperforms popular baseline models on both automatic metrics and human evaluations while generating more interpretable empathetic responses.
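The abstract's central training idea — scoring the *whole* generated sentence with an emotion-derived reward and optimizing it with RL, rather than only token-level likelihood — can be illustrated with a toy REINFORCE sketch. Everything here (the vocabulary, the per-position logit "policy", and the reward that counts empathetic tokens) is an illustrative assumption, not the paper's actual architecture or reward design.

```python
import math
import random

random.seed(0)

# Toy vocabulary; treating "sorry" and "glad" as empathetic tokens is an
# illustrative assumption, not part of the paper.
VOCAB = ["i", "am", "sorry", "glad", "ok", "<eos>"]
EMPATHETIC = {"sorry", "glad"}
MAX_LEN = 4

# Policy: independent per-position logits, a stand-in for a seq2seq decoder.
logits = [[0.0] * len(VOCAB) for _ in range(MAX_LEN)]
baseline = 0.0  # running-average reward, used as a variance-reducing baseline


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]


def sample_sentence():
    """Sample a full token sequence from the current policy."""
    tokens = []
    for pos in range(MAX_LEN):
        probs = softmax(logits[pos])
        tok = random.choices(range(len(VOCAB)), weights=probs)[0]
        tokens.append(tok)
        if VOCAB[tok] == "<eos>":
            break
    return tokens


def sentence_reward(tokens):
    """Sentence-level reward: fraction of empathetic tokens (toy emotion signal)."""
    words = [VOCAB[t] for t in tokens]
    return sum(w in EMPATHETIC for w in words) / len(words)


def reinforce_step(lr=0.5):
    """One REINFORCE update: score the whole sampled sentence, then push the
    logits of the sampled tokens in proportion to the advantage."""
    global baseline
    tokens = sample_sentence()
    r = sentence_reward(tokens)
    advantage = r - baseline
    baseline = 0.9 * baseline + 0.1 * r
    for pos, tok in enumerate(tokens):
        probs = softmax(logits[pos])
        for v in range(len(VOCAB)):
            # gradient of log p(tok) w.r.t. logit v: 1[v == tok] - p(v)
            grad = (1.0 if v == tok else 0.0) - probs[v]
            logits[pos][v] += lr * advantage * grad


for _ in range(500):
    reinforce_step()

# After training, empathetic tokens should carry more probability mass
# at the first position than their uniform starting share.
p0 = softmax(logits[0])
empathetic_mass = sum(p for p, w in zip(p0, VOCAB) if w in EMPATHETIC)
```

Because the reward is computed over the complete sampled sentence, the model is trained on its own outputs rather than only on teacher-forced prefixes — this is the mechanism by which sentence-level RL objectives reduce exposure bias.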
Keywords
Commonsense reasoning, Generators, Regulators, Context modeling, Computational modeling, Pediatrics, Reinforcement learning, Cognition and affection, empathetic dialog, reinforcement learning (RL)