Communicating Missing Causal Information to Explain a Robot’s Past Behavior

ACM Transactions on Human-Robot Interaction (2023)

Abstract
Robots need to explain their behavior to gain trust. Existing research has focused on explaining a robot’s current behavior, but explaining past actions remains an open challenge: the environment may change after the robot acts, so critical causal information goes missing when objects have been moved. We conducted an experiment (N = 665) investigating how a robot could help participants infer this missing causal information by physically replaying its past behavior, giving verbal explanations, and projecting visual information onto the environment. Participants watched videos of the robot replaying its completion of an integrated mobile kitting task. During the replay, the objects were already gone, so participants needed to infer where an object had been picked, where a ground obstacle had been, and where the object had been placed. Based on the results, we recommend combining physical replay with speech and projection indicators (Replay-Project-Say) to help observers infer all the missing causal information (picking, navigation, and placement) from the robot’s past actions. This condition yielded the best outcomes on both task-based metrics—effectiveness, efficiency, and confidence—and team-based metrics—workload and trust. If efficiency is the priority, we recommend projection markers for navigation inferences and verbal markers for placement inferences.
Keywords
Robot explanation, behavior explanation, system transparency