Whom to Trust, How and Why: Untangling Artificial Intelligence Ethics Principles, Trustworthiness, and Trust

IEEE Intelligent Systems (2023)

Abstract
In this article, we present an overview of the literature on trust in artificial intelligence (AI) and AI trustworthiness, and we argue for distinguishing these concepts more clearly and for gathering more empirical evidence on what contributes to people's trusting behaviors. We discuss how trust in AI involves not only reliance on the system itself but also trust in the system's developers. AI ethics principles such as explainability and transparency are often assumed to promote user trust, but empirical evidence of how such features actually affect users' perceptions of a system's trustworthiness remains scarce and inconclusive. AI systems should be recognized as sociotechnical systems, in which the people involved in designing, developing, deploying, and using the system are as important as the system itself in determining whether it is trustworthy. Without recognizing these nuances, "trust in AI" and "trustworthy AI" risk becoming nebulous terms for any desirable feature of AI systems.
Keywords
Artificial intelligence, Ethics, Intelligent systems, Control systems, Stakeholders, Automation, Training data