Why People Judge Humans Differently from Machines: The Role of Perceived Agency and Experience.

Jingling Zhang, Jane Conway, César A. Hidalgo

2023 14th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)

Abstract
People are known to judge artificial intelligence using a utilitarian moral philosophy and humans using a moral philosophy that emphasizes perceived intentions. But why do people judge humans and machines differently? Psychology suggests that people may hold different mind perception models for humans and machines, and thus may treat human-like robots more similarly to the way they treat humans. Here we present a randomized experiment in which we manipulated people's perception of machine agency (e.g., the ability to plan and act) and experience (e.g., the ability to feel) to explore whether people judge machines that are perceived as more human-like along these two dimensions more similarly to the way they judge humans. We find that people's judgments of machines become more similar to their judgments of humans when they perceive machines as having more agency, but not more experience. Our findings indicate that people's use of different moral philosophies to judge humans and machines can be explained by a progression of mind perception models in which the perception of agency plays a prominent role. These findings add to the body of evidence suggesting that people's judgment of machines can become more similar to their judgment of humans, motivating further work on the dimensions that modulate people's judgment of human and machine actions.
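The abstract describes a randomized experiment that manipulates two perceived dimensions of machine minds (agency and experience) and compares the resulting moral judgments. The sketch below is a hypothetical illustration of that kind of 2x2 between-subjects design and its analysis, using simulated data; the variable names (agency, experience, judgment), sample size, and effect sizes are assumptions for illustration, not the authors' actual materials or results.

```python
# Hypothetical sketch of a 2x2 between-subjects randomized experiment:
# participants are randomly assigned to a machine framed as high/low agency
# and high/low experience, then rate the moral wrongness of its action.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # assumed number of participants

# Random assignment to the four framing conditions
df = pd.DataFrame({
    "agency": rng.integers(0, 2, n),      # 0 = low-agency framing, 1 = high-agency
    "experience": rng.integers(0, 2, n),  # 0 = low-experience framing, 1 = high-experience
})

# Simulated outcome: judgments shift with perceived agency but not experience,
# mirroring the direction of the reported finding (purely illustrative data).
df["judgment"] = (
    3.0 + 0.8 * df["agency"] + 0.05 * df["experience"]
    + rng.normal(0.0, 1.0, n)
)

# A factorial OLS with an interaction term tests whether each manipulated
# dimension (and their combination) shifts moral judgments.
model = smf.ols("judgment ~ agency * experience", data=df).fit()
print(model.summary())
```

In a design like this, a reliable main effect of the agency manipulation, with no comparable effect of the experience manipulation, would correspond to the pattern the abstract reports.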
Keywords
Perceptions Of People, Randomized Experiment, Perceptions Of Agency, People's Judgments, Moral Responsibility, Moral Judgment, State Machine, Mental Models, Anthropomorphic, Self-driving, Competency Framework, Differences In Judgments, Role Of Intention, Reactions Of People, Harmful Role, Trolley Problem