Artificial Agents In Natural Moral Communities: A Brief Clarification

Cambridge Quarterly of Healthcare Ethics (2021)

Abstract
What exactly is it that makes one morally responsible? Is it a set of facts which can be objectively discerned, or is it something more subjective, a reaction to the agent or context-sensitive interaction? This debate gets raised anew when we encounter newfound examples of potentially marginal agency. Accordingly, the emergence of artificial intelligence (AI) and the idea of "novel beings" represent exciting opportunities to revisit inquiries into the nature of moral responsibility. This paper expands upon my article "Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible" and clarifies my reliance upon two competing views of responsibility. Although AI and novel beings are not close enough to us in kind to be considered candidates for the same sorts of responsibility we ascribe to our fellow human beings, contemporary theories show us the priority and adaptability of our moral attitudes and practices. This allows us to take seriously the social ontology of relationships that tie us together. In other words, moral responsibility is to be found primarily in the natural moral community, even if we admit that those communities now contain artificial agents.
Keywords
moral responsibility, moral agency, blame, machine ethics, artificial intelligence, human-robot interaction