Multi-Agent Strategy Explanations for Human-Robot Collaboration
CoRR (2023)
Abstract
As robots are deployed in human spaces, it is important that they are able to
coordinate their actions with the people around them. Part of such coordination
involves ensuring that people have a good understanding of how a robot will act
in the environment. This can be achieved through explanations of the robot's
policy. Much prior work in explainable AI and reinforcement learning focuses on
generating explanations for single-agent policies, but generating explanations
for collaborative policies remains largely unexplored. In this work, we
investigate how to generate multi-agent strategy explanations for human-robot
collaboration. We formulate the problem using a generic multi-agent planner,
show how to generate visual explanations through strategy-conditioned landmark
states, and generate textual explanations by giving the landmarks to a large
language model (LLM). Through a user study, we find that when presented with
explanations from our proposed framework, users are able to better explore the
full space of strategies and collaborate more efficiently with new robot
partners.