Relative Effects of Positive and Negative Explanations on Satisfaction and Performance in Human-Agent Teams.

Bryan Lavender, Sami Abuhaimed, Sandip Sen

FLAIRS (2023)

Abstract
Improving agent capabilities and the increasing availability of computing platforms and Internet connectivity allow for more effective and diverse collaboration between human users and automated agents. To increase the viability and effectiveness of human-agent collaborative teams, there is a pressing need for research enabling such teams to maximally leverage the relative strengths of human and automated reasoners. We study virtual and ad-hoc teams, each comprising a human and an agent, collaborating over a few episodes where each episode requires them to complete a set of tasks chosen from given task types. Team members are initially unaware of their partners' capabilities, and the agent, acting as the task allocator, has to adapt the allocation process to maximize team performance. The focus of the current paper is on analyzing how explanations of allocation decisions affect both team performance and the human workers' outlook, including factors such as motivation and satisfaction. We investigate the effect of explanations provided by the agent allocator to the human on performance and on key factors reported by the human teammate in surveys. Survey factors include the effect of explanations on motivation, explanatory power, and understandability, as well as satisfaction with and trust/confidence in the teammate. We evaluated a set of hypotheses on these factors related to positive-, negative-, and no-explanation scenarios through experiments conducted with MTurk workers.
Keywords
negative explanations, satisfaction, positive, teams, human-agent