Explainable Artificial Intelligence Improves Human Decision-Making: Results from a Mushroom Picking Experiment at a Public Art Festival

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION (2023)

Abstract
Explainable Artificial Intelligence (XAI) enables Artificial Intelligence (AI) to explain its decisions. This holds the promise of making AI more understandable to users, improving interaction, and establishing an adequate level of trust. We tested this claim in the high-risk task of AI-assisted mushroom hunting, in which people had to decide whether a mushroom was edible or poisonous. In a between-subjects experiment, 328 visitors to an Austrian media art festival played a tablet-based mushroom hunting game while walking through a highly immersive artificial indoor forest. As part of the game, an artificially intelligent app analyzed photos of the mushrooms they found and recommended classifications. One group saw only the AI's decisions, while a second group additionally received attribution-based and example-based visual explanations of the AI's recommendations. The results show that participants who received visual explanations outperformed participants without explanations in correct edibility assessments and pick-up decisions. This exhibition-based experiment thus replicated the decision-making results of a previous online study. Unlike in the previous study, however, the visual explanations did not significantly affect trust or acceptance measures. We consequently discuss the findings of both studies in a direct comparison, focusing on their generalizability. Beyond this scientific contribution, we discuss the direct impact of conducting XAI experiments in immersive, art- and game-based exhibition environments on visitors and local communities, as such experiments trigger reflection on and awareness of psychological issues in human-AI interaction.
Keywords
XAI, visual explanation, mushroom identification, trust calibration, conceptual replication