Effects of Explanation Strategy and Autonomy of Explainable AI on Human–AI Collaborative Decision-making

International Journal of Social Robotics (2024)

Abstract
This study examined the effects of explanation strategy (global explanation vs. deductive explanation vs. contrastive explanation) and autonomy level (high vs. low) of explainable agents on human–AI collaborative decision-making. A 3 × 2 mixed-design experiment was conducted, using a modified Mahjong game as the decision-making task. Forty-eight participants were divided into three groups, each collaborating with an agent that used a different explanation strategy; each agent had two autonomy levels. The results indicated that global explanation incurred the lowest mental workload and the highest understandability. Contrastive explanation required the highest mental workload but yielded the highest perceived competence, affect-based trust, and social presence. Deductive explanation yielded the lowest social presence. The high-autonomy agents resulted in lower mental workload and interaction fluency but higher faith and social presence than the low-autonomy agents. The findings of this study can help practitioners design user-centered explainable decision-support agents and choose appropriate explanation strategies for different situations.
Keywords
Autonomy, Decision-making, Explainability, Human–AI interaction