Decision control and explanations in human-AI collaboration: Improving user perceptions and compliance.

Comput. Hum. Behav. (2023)

Abstract
Human-AI collaboration has become common, integrating highly complex AI systems into the workplace. Still, it is often ineffective: impaired perceptions, such as low trust or limited understanding, reduce compliance with the recommendations the AI system provides. Drawing on cognitive load theory, we examine two techniques of human-AI collaboration as potential remedies. In three experimental studies, we grant users decision control by empowering them to adjust the system's recommendations, and we offer explanations of the system's reasoning. We find that decision control positively affects user perceptions of trust and understanding and improves user compliance with system recommendations. Next, we isolate distinct effects of providing explanations that may help explain inconsistent findings in the recent literature: while explanations help users re-enact the system's reasoning, they also increase task complexity. Further, the effectiveness of an explanation depends on the individual user's cognitive ability to handle complex tasks. In summary, our study shows that users benefit from enhanced decision control, while explanations, unless appropriately designed for the specific user, may even harm user perceptions and compliance. This work bears both theoretical and practical implications for the management of human-AI collaboration.
Keywords
Human-AI collaboration, Decision control, Explanations, User trust, User compliance, Task complexity