CSSE - An agnostic method of counterfactual, selected, and social explanations for classification models.

Expert Syst. Appl. (2023)

Abstract
In some contexts, achieving high predictive capability may be sufficient for a machine learning model. In many scenarios, however, it is necessary to understand the model’s decisions in order to increase confidence in the predictions and to direct the actions taken based on them. It is therefore essential to provide interpretable models. Some authors have nonetheless pointed out the need to improve current interpretability methods so that they provide adequate explanations, especially for non-specialists in machine learning. The solution is to expand studies beyond computational issues toward a better understanding of how people receive explanations. Based on the literature, we identified three aspects to be considered in explanations: they should be contrastive, selected, and social. The counterfactual approach, contrastive in nature, informs the user of how the model’s decision can be altered through minimal changes to the input features. Given this, we introduce the Agnostic Method of Counterfactual, Selected, and Social Explanations (CSSE), which generates local explanations for classification models using a genetic algorithm. As contributions, we highlight that CSSE offers counterfactual explanations for learning models; presents explanations with diversity and without prolixity; and allows the user to restrict the features that appear in the explanation (actionability), along with other parameterization options through which users can communicate their preferences. A particular novelty of our work is that the user can adjust the importance given to sparsity (minimizing the number of changes) versus similarity (minimizing the distance). Furthermore, we indicate other uses for the actionability functionality, originally intended to lock immutable features, allowing users to block features according to their interests or expertise.
These resources can help users obtain explanations more targeted to their objectives and advance interpretability further by considering both computational and social aspects when generating explanations. Experiments showed that CSSE achieves relevant results compared with existing approaches. The work also includes a case study applying CSSE to predicting the academic performance of children and adolescents with ADHD. The proposed method thus advances interpretability by offering explanations aimed at the end user, which can generate greater acceptance of, confidence in, and understanding of the models’ decisions. The method implementation is available at https://codeocean.com/capsule/7060371/tree.
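The abstract describes three ingredients: a genetic algorithm that searches for counterfactuals, a user-tunable trade-off between similarity (distance to the original instance) and sparsity (number of changed features), and an actionability constraint that locks selected features. The sketch below illustrates that general idea only; it is not the authors' CSSE algorithm (their implementation is at the Code Ocean link above), and all names (`counterfactual_ga`, `w_similarity`, the toy `predict` model) are hypothetical.

```python
import random

def counterfactual_ga(x, predict, bounds, immutable, w_similarity=0.5,
                      pop_size=40, generations=60, seed=0):
    """Illustrative GA counterfactual search for a binary classifier.

    Fitness trades similarity (normalized distance to x) against
    sparsity (count of changed features) via w_similarity in [0, 1];
    features whose indexes appear in `immutable` are never altered
    (actionability). Returns a counterfactual list, or None.
    """
    rng = random.Random(seed)
    target = 1 - predict(x)                 # assumes classes are 0/1
    n = len(x)
    mutable = [i for i in range(n) if i not in immutable]

    def candidate():
        c = list(x)
        for i in mutable:                   # perturb only mutable features
            if rng.random() < 0.5:
                lo, hi = bounds[i]
                c[i] = rng.uniform(lo, hi)
        return c

    def fitness(c):
        if predict(c) != target:            # invalid: class did not flip
            return float("inf")
        dist = sum(abs(a - b) / (hi - lo)
                   for a, b, (lo, hi) in zip(c, x, bounds))
        changes = sum(a != b for a, b in zip(c, x))
        # user-adjustable similarity-vs-sparsity weighting
        return w_similarity * dist + (1 - w_similarity) * changes

    pop = [candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]       # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)   # uniform crossover
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(n)]
            if rng.random() < 0.3:          # mutate a mutable feature
                i = rng.choice(mutable)
                lo, hi = bounds[i]
                child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return best if fitness(best) < float("inf") else None

# Toy model: class 1 iff the feature sum exceeds 1.0.
predict = lambda v: int(v[0] + v[1] > 1.0)
x = [0.2, 0.3]                              # currently classified as 0
bounds = [(0.0, 1.0), (0.0, 1.0)]
cf = counterfactual_ga(x, predict, bounds, immutable={1})
```

In this toy run the second feature is locked, so any returned counterfactual must flip the class by changing the first feature alone, mirroring how a user might block features they cannot act on.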
Keywords
Counterfactual explanations, Explainable artificial intelligence, Interpretability, Machine learning, Genetic algorithm, Classification, ADHD