Let Me Think! Investigating the Effect of Explanations Feeding Doubts About the AI Advice.

CD-MAKE (2023)

Abstract
Augmented Intelligence (AuI) refers to the use of artificial intelligence (AI) to amplify certain cognitive tasks performed by human decision-makers. However, there are concerns that AI’s increasing capability and alignment with human values may undermine user agency, autonomy, and responsible decision-making. To address these concerns, we conducted a user study in the field of orthopedic radiology diagnosis, introducing a reflective XAI (explainable AI) support system aimed at stimulating human reflection, and we evaluated its impact in terms of decision performance, decision confidence, and perceived utility. Specifically, the reflective XAI support system prompted users to reflect on the dependability of AI-generated advice by presenting evidence both in favor of and against its recommendation. This evidence took the form of two cases that closely resembled a given base case, along with pixel attribution maps. Both cases had received the same AI advice as the base case, but one was accurate while the other was erroneous with respect to the ground truth. While the introduction of this support system did not significantly enhance diagnostic accuracy, it was highly valued by more experienced users. Based on the findings of this study, we advocate for further research to validate the potential of reflective XAI in fostering more informed and responsible decision-making, ultimately preserving human agency.
Keywords
AI advice, explanations feeding doubts