Artificial Moral Advisors: A New Perspective from Moral Psychology.

AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2022

Abstract
Philosophers have recently put forward the possibility of achieving moral enhancement through artificial intelligence (e.g., Giubilini and Savulescu's version [32]), proposing various forms of "artificial moral advisor" (AMA) to help people make moral decisions without the drawbacks of human cognitive limitations. In this paper, we provide a new perspective on the AMA, drawing on empirical evidence from moral psychology to point out several challenges to these proposals that have been largely neglected by AI ethicists. In particular, we suggest that the AMA in its current conception is fundamentally misaligned with human moral psychology: it incorrectly assumes a static framework of moral values underpinning the AMA's attunement to individual users, and people's reactions and subsequent (in)actions in response to the AMA's suggestions will likely diverge substantially from expectations. As such, we note the necessity of a coherent understanding of human moral psychology in the future development of AMAs.