Characterizing the Contribution of Dependent Features in XAI Methods.

IEEE Journal of Biomedical and Health Informatics (2024)

Abstract
Explainable Artificial Intelligence (XAI) provides tools that help users understand how AI models work and how they reach a particular decision or outcome. It increases the interpretability of models and makes them more trustworthy and transparent. In this context, many XAI methods have been proposed to make black-box and complex models more digestible from a human perspective. However, one of the main issues XAI methods face, especially when dealing with a large number of features, is multicollinearity, which casts doubt on the robustness of XAI outcomes such as the ranking of informative features. Most current XAI methods either ignore collinearity or assume the features are independent, which, in general, is not necessarily true. Here, we propose a simple yet useful proxy that modifies the outcome of any XAI feature-ranking method to account for dependency among the features and to reveal its impact on the outcome. The proposed method was applied to SHAP as an example of an XAI method that assumes feature independence. For this purpose, several models were trained on a well-known classification task (males versus females) using nine cardiac phenotypes extracted from cardiac magnetic resonance imaging as features. Principal component analysis and biological plausibility were employed to validate the proposed method. Our results showed that, in the presence of collinearity, the proposed proxy leads to a more robust list of informative features than the original SHAP.
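The abstract does not spell out the proxy's formula, but the following Python sketch illustrates the general idea under a loudly stated assumption: standard SHAP importances are redistributed across correlated features, using row-normalised absolute Pearson correlations as weights. The synthetic dataset, the random-forest model, and the correlation-weighted redistribution (`weights @ base_importance`) are illustrative choices for this sketch, not the authors' method.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the paper's data: 9 features, some of them
# deliberately redundant (collinear).
X, y = make_classification(n_samples=500, n_features=9, n_informative=4,
                           n_redundant=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Standard SHAP importances (mean |SHAP value| per feature); SHAP itself
# treats the features as independent.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
if isinstance(shap_values, list):      # older shap: one array per class
    vals = shap_values[1]
elif shap_values.ndim == 3:            # newer shap: (samples, features, classes)
    vals = shap_values[:, :, 1]
else:
    vals = shap_values
base_importance = np.abs(vals).mean(axis=0)

# Hypothetical dependency-aware proxy (an assumption, not the paper's exact
# formula): each feature's adjusted score is a weighted average of the base
# scores of the features it correlates with, with |Pearson correlation| as
# the weight and each row normalised to sum to one.
corr = np.abs(np.corrcoef(X, rowvar=False))
weights = corr / corr.sum(axis=1, keepdims=True)
adjusted_importance = weights @ base_importance

print("Original ranking: ", np.argsort(base_importance)[::-1])
print("Adjusted ranking: ", np.argsort(adjusted_importance)[::-1])
```

With independent features the correlation matrix is close to the identity, so the adjusted ranking coincides with the original SHAP ranking; the two diverge only when collinearity is present, which matches the behaviour the abstract describes.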