Artificial Intelligence, Trust, and Perceptions of Agency

Academy of Management Review (2024)

Abstract
Extant theories of trust assume that the trustee has agency (i.e., intentionality and free will). We propose that a crucial qualitative distinction between placing trust in Artificial Intelligence (AI) and placing trust in a human lies in the degree to which the (human) trustor attributes agency to the trustee. We specify two mechanisms through which the extent of agency attributions can affect human trust in AI. First, the importance of the benevolence of the trustee (the AI) increases if the AI is seen as more agentic, but so does the anticipated psychological cost if it violates the trust (because of betrayal aversion; see Bohnet & Zeckhauser, 2004). Second, attributions of benevolence and competence become less relevant for placing confidence in a non-agentic-seeming AI system; instead, attributions of benevolence and competence to the designer of the system become important. Both mechanisms imply that making an AI appear more agentic may either increase or decrease the trust that humans place in it. While designers of AI technology often strive to endow their creations with features that convey a benevolent nature (e.g., through anthropomorphism or transparency), doing so may also change agency perceptions in a manner that makes the AI less trustworthy in human eyes.