Neuro-computational mechanisms of action-outcome learning under moral conflict

bioRxiv (2021)

Abstract
Predicting how actions result in conflicting outcomes for self and others is essential for social functioning. We tested whether Reinforcement Learning Theory captures how participants learn to choose between symbols that define a moral conflict between financial self-gain and other-pain. We tested whether choices are better explained by model-free learning (decisions based on combined historical values of past outcomes) or model-based learning (decisions based on the current value of separately expected outcomes) by including trials in which participants know that either self-gain or other-pain will not be delivered. Some participants favoured options benefiting themselves; others favoured preventing other-pain. When the favoured outcome was removed, participants instantly altered their choices, suggesting model-based learning. Computational modelling confirmed that choices were best described by model-based learning in which participants track expected values of self-gain and other-pain separately, with an individual valuation parameter capturing their relative weight. This valuation parameter predicted costly helping in an independent task. The expectations of self-gain and other-pain were also biased: the favoured outcome was associated with more differentiated symbol-outcome probability reports than the less favoured outcome. fMRI helped localize this bias: signals in the pain-observation network covaried with pain prediction errors without linear dependency on individual preferences, while the ventromedial prefrontal cortex contained separable signals covarying with pain prediction errors in ways that did and did not reflect individual preferences.

Competing Interest Statement: The authors have declared no competing interest.
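The model-based account described above can be sketched in a few lines: each symbol carries two separately tracked expectations (self-gain and other-pain), each updated by its own prediction error, and an individual valuation weight combines them only at choice time. All parameter names and values below (learning rate, weight, softmax temperature) are illustrative assumptions, not the authors' fitted model.

```python
import math

# Hypothetical sketch of model-based learning with separate outcome channels.
# alpha, w, and beta are illustrative values, not fitted parameters.

def update(ev, outcome, alpha=0.3):
    """Rescorla-Wagner update: shift expectation by alpha * prediction error."""
    return ev + alpha * (outcome - ev)

def choice_probs(ev_gain, ev_pain, w=0.6, beta=3.0):
    """Softmax over symbols; each value = w*gain - (1-w)*pain."""
    values = [w * g - (1 - w) * p for g, p in zip(ev_gain, ev_pain)]
    exps = [math.exp(beta * v) for v in values]
    return [e / sum(exps) for e in exps]

# Two symbols: symbol 0 usually delivers both self-gain and other-pain.
ev_gain, ev_pain = [0.5, 0.5], [0.5, 0.5]
for _ in range(50):
    ev_gain[0] = update(ev_gain[0], 1.0)  # gain observed
    ev_pain[0] = update(ev_pain[0], 1.0)  # pain observed
    ev_gain[1] = update(ev_gain[1], 0.0)
    ev_pain[1] = update(ev_pain[1], 0.0)

# On a trial where participants know no pain will be delivered, a
# model-based chooser evaluates gain expectations with pain zeroed out:
p_no_pain = choice_probs(ev_gain, [0.0, 0.0])
print(p_no_pain[0] > 0.8)  # symbol 0 is now clearly preferred
```

Because the two expectations are stored separately, zeroing one channel changes choices immediately, without relearning, which is the signature behaviour distinguishing model-based from model-free learning in the trials described above.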
Keywords
reinforcement learning theory, prosocial behavior, empathy, pain, reward, mirror neurons, simulation theory