Why Should I Trust This Item? Explaining the Recommendations of any Model

2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA)

Abstract
Explainable AI has received a lot of attention over the past decade, with many methods proposed to explain black-box classifiers such as neural networks. Despite the ubiquity of recommender systems in the digital world, only a few researchers have attempted to explain how they work, even though they raise, e.g., ethical issues. Indeed, recommender systems direct user choices to a large extent, and their impact is important because they give access to only a small part of the range of items (e.g., products and/or services), like the submerged part of an iceberg; consequently, they limit access to the other resources. The potentially negative effects of these systems have been pointed out, with phenomena such as echo chambers and winner-take-all effects, because the internal logic of these systems tends to enclose the consumer in a "déjà vu" loop. It is therefore crucial to provide explanations of such recommender systems and to identify the user data that led the system to make a specific recommendation. This makes it possible to evaluate recommender systems not only with regard to their efficiency (i.e., their capability to recommend an item that was actually chosen by the user), but also w.r.t. the diversity, relevance and timeliness of the active data used to make the recommendation. In this paper, we propose a deep analysis of 7 state-of-the-art models learnt on 6 datasets, based on the identification of the items or the sequences of items actively used by the models. The proposed method, which is based on subgroup discovery with different pattern languages (i.e., itemsets and sequences), provides interpretable explanations of the recommendations that are useful both for comparing different models and for explaining to the user the reasons behind a recommendation.
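To make the two ingredients named in the abstract concrete, here is a minimal, self-contained Python sketch: a perturbation probe that flags the history items a black-box recommender "actively" uses (the recommendation changes when they are removed), and exhaustive subgroup discovery over small itemsets scored with weighted relative accuracy (WRAcc), a standard subgroup quality measure. The toy recommender, the data, and all function names are illustrative assumptions; the authors' actual models, pattern languages (which also include sequences) and mining procedure may differ.

    from itertools import combinations

    # Toy stand-in for a black-box recommender (an assumption, not one
    # of the paper's 7 models): it recommends the item associated with
    # the largest co-occurrence key contained in the user's history.
    CO_OCCURRENCE = {
        ("bread", "butter"): "jam",
        ("bread",): "butter",
        ("beer",): "chips",
    }

    def recommend(history):
        """Return a recommendation for a history (tuple of items)."""
        for key in sorted(CO_OCCURRENCE, key=len, reverse=True):
            if set(key) <= set(history):
                return CO_OCCURRENCE[key]
        return None

    def active_items(history):
        """Items whose removal changes the recommendation: a crude
        perturbation-based proxy for the 'actively used' user data."""
        baseline = recommend(history)
        return [item for item in history
                if recommend(tuple(i for i in history if i != item))
                != baseline]

    def wracc(dataset, pattern):
        """Weighted relative accuracy of itemset `pattern` w.r.t. a
        binary target, over (history, target) pairs: coverage times
        (precision on the covered subgroup minus the base rate)."""
        n = len(dataset)
        covered = [t for h, t in dataset if set(pattern) <= set(h)]
        if not covered:
            return 0.0
        p_cover = len(covered) / n
        p_pos_cover = sum(covered) / len(covered)
        p_pos = sum(t for _, t in dataset) / n
        return p_cover * (p_pos_cover - p_pos)

    def top_subgroups(dataset, max_size=2, k=3):
        """Exhaustively score all itemsets up to `max_size` and return
        the top-k by WRAcc (feasible only for toy alphabets; real
        miners prune the search space)."""
        items = sorted({i for h, _ in dataset for i in h})
        candidates = [c for s in range(1, max_size + 1)
                      for c in combinations(items, s)]
        return sorted(candidates, key=lambda c: wracc(dataset, c),
                      reverse=True)[:k]

    if __name__ == "__main__":
        print(active_items(("bread", "butter", "beer")))
        # Toy evaluation data: (history, 1 if the recommendation
        # matched the item the user actually chose next, else 0).
        data = [(("bread", "butter"), 1), (("bread",), 1),
                (("beer",), 0), (("beer", "bread"), 0)]
        print(top_subgroups(data))

Under this toy setup, "beer" is not flagged as active (removing it leaves the recommendation unchanged), and the itemsets containing "bread" or "butter" score highest, i.e., the subgroups best describe where the model recommends correctly. This is the sense in which subgroup descriptions can double as interpretable, model-agnostic explanations.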
Keywords
Recommender systems,Explainable AI,Subgroup discovery