Leveraging Counterfactual Paths for Contrastive Explanations of POMDP Policies
CoRR (2024)
Abstract
As humans increasingly rely on autonomous systems, ensuring the transparency
of such systems is important to their continued adoption. Explainable
Artificial Intelligence (XAI) aims to reduce confusion and foster trust in
systems by providing explanations of agent behavior. Partially observable
Markov decision processes (POMDPs) provide a flexible framework capable of
reasoning over transition and state uncertainty, while also being amenable to
explanation. This work investigates the use of user-provided counterfactuals to
generate contrastive explanations of POMDP policies. Feature expectations are
used as a means of contrasting the performance of these policies. We
demonstrate our approach in a Search and Rescue (SAR) setting. We analyze and
discuss the associated challenges through two case studies.
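The abstract's notion of contrasting policies via feature expectations follows the standard definition mu(pi) = E[sum_t gamma^t phi(s_t)]: the expected discounted sum of state features under a policy. Below is a minimal Monte Carlo sketch of that quantity; the `policy`, `step`, and `features` callables and the toy chain environment are illustrative assumptions, not the paper's actual SAR model or implementation.

```python
import random

def feature_expectations(policy, step, features, init_state,
                         gamma=0.95, horizon=50, episodes=200, seed=0):
    """Monte Carlo estimate of discounted feature expectations
    mu(pi) = E[sum_t gamma^t * phi(s_t)] under policy pi.
    `step(s, a, rng)` samples a successor state; `features(s)` returns
    a feature vector phi(s). Both are caller-supplied assumptions here.
    """
    rng = random.Random(seed)
    k = len(features(init_state))
    mu = [0.0] * k
    for _ in range(episodes):
        s = init_state
        discount = 1.0
        for _ in range(horizon):
            phi = features(s)
            for i in range(k):
                mu[i] += discount * phi[i]  # accumulate discounted features
            s = step(s, policy(s), rng)
            discount *= gamma
    return [m / episodes for m in mu]  # average over rollouts

# Toy 4-state chain: action 1 moves right (capped at state 3), action 0 stays.
def step(s, a, rng):
    return min(3, s + a)

def features(s):
    return [1.0 if s == 3 else 0.0]  # indicator: reached the goal state

mu_go = feature_expectations(lambda s: 1, step, features, 0,
                             gamma=0.5, horizon=10, episodes=5)
mu_stay = feature_expectations(lambda s: 0, step, features, 0,
                               gamma=0.5, horizon=10, episodes=5)
```

Comparing `mu_go` and `mu_stay` componentwise is what makes the explanation contrastive: the difference in expected feature accumulation (here, time spent at the goal) quantifies how the factual policy outperforms a user-proposed counterfactual.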