Do Moral Robots Always Fail? Investigating Human Attitudes Towards Ethical Decisions Of Automated Systems

2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017

Abstract
Technological advances will soon make it possible for automated systems (such as vehicles or search and rescue drones) to take over tasks that have so far been performed by humans. Still, it will be humans who interact with these systems, and relying on a system and its decisions will require trust in the robot/machine and its algorithms. Trust research has a long history, yet one dimension of trust, ethical or morally acceptable decisions, has received little attention so far. Humans are continuously faced with ethical decisions, which they reach based on a personal value system and intuition. For people to be able to trust a system, it must have widely accepted ethical capabilities. Although some studies indicate that people prefer utilitarian decisions in critical situations, e.g., when a decision requires favoring one person over another, this approach would violate laws and international human rights, as individuals must not be ranked or classified by personal characteristics. One solution to this dilemma would be to make decisions by chance - but would system users accept that? To find out whether randomized decisions are accepted by humans in morally ambiguous situations, we conducted an online survey in which subjects rated their personal attitudes toward the decisions of moral algorithms in different scenarios. Our results (n=330) show that, although slightly more respondents state a preference for decisions based on ethical rules, randomization is perceived as most just and morally right, and may thus drive decisions in cases where other objective parameters are equal.
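A minimal sketch of what such a chance-based decision rule could look like, under the assumption described in the abstract that randomization only applies once all objective, non-personal parameters are equal; the function and parameter names (decide, utility) are illustrative and not taken from the paper:

```python
import random

def decide(options, utility):
    """Pick an option, breaking ties between equally scored options by chance.

    `utility` is assumed to map each option to a score derived from
    objective, non-personal parameters only (no ranking of individuals
    by personal characteristics).
    """
    best = max(utility(o) for o in options)
    tied = [o for o in options if utility(o) == best]
    # When objective parameters equate, a uniformly random choice
    # avoids favoring one person over another.
    return random.choice(tied)

# Example: two rescue targets with identical objective scores.
print(decide(["target_a", "target_b"], lambda o: 1.0))
```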