Convergent Approaches to AI Explainability for HEP Muonic Particles Pattern Recognition

Leandro Maglianella, Lorenzo Nicoletti, Stefano Giagu, Christian Napoli, Simone Scardapane

Computing and Software for Big Science (2023)

Abstract
Neural networks are commonly described as ‘black-box’ models, meaning that the mechanism by which they produce predictions and make decisions is not immediately clear or even understandable by humans. Explainable Artificial Intelligence (xAI) therefore aims to overcome this limitation by providing explanations for Machine Learning (ML) algorithms and, consequently, making their outcomes reliable for users. However, different xAI methods may provide different explanations, both quantitatively and qualitatively, and this heterogeneity of approaches makes it difficult for a domain expert to select and interpret their results. In this work, we consider this issue in the context of a high-energy physics (HEP) use case concerning muonic motion. In particular, we explored an array of xAI methods based on different approaches and tested their capabilities on our use case. As a result, we obtained an array of potentially easy-to-understand and human-readable explanations of the models’ predictions; for each of them we describe its strengths and drawbacks in this particular scenario, providing an interesting atlas of the convergent application of multiple xAI algorithms in a realistic context.
Keywords
Explainable Artificial Intelligence, High-energy physics, Saliency maps methods, Intrinsically interpretable Decision Trees, Tracing gradient descent