Machine learning explainability via microaggregation and shallow decision trees

Knowledge-Based Systems (2020)

Cited by 41 | Views 44
Abstract
Artificial intelligence (AI) is being deployed in missions that are increasingly critical for human life. To build trust in AI and avoid an algorithm-based authoritarian society, automated decisions should be explainable. This is not only a right of citizens, enshrined for example in the European General Data Protection Regulation, but also a desirable goal for engineers, who want to know whether their decision algorithms are capturing the relevant features. For explainability to be scalable, it should be possible to derive explanations in a systematic way. A common approach is to use a simpler, more intuitive decision algorithm to build a surrogate model of the black-box model (for example, a deep learning model) used to make a decision. Yet there is a risk that the surrogate model is too large to be really comprehensible to humans. We focus on explaining black-box models by using decision trees of limited depth as surrogate models. Specifically, we propose an approach based on microaggregation to achieve a trade-off between the comprehensibility and representativeness of the surrogate model on the one side and the privacy of the subjects whose data were used to train the black-box model on the other side.
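The sketch below illustrates the general idea described in the abstract, not the authors' exact algorithm: training records are grouped by a simplified, MDAV-style microaggregation with a minimum group size k, each group is replaced by its centroid, the centroids are labeled with the black-box model's own predictions, and a depth-limited decision tree is fitted on them as the surrogate. The helper `mdav_centroids`, the stand-in `black_box` model, and the parameters `k` and `max_depth` are all assumptions made for illustration.

```python
# Illustrative sketch only: a simplified microaggregation-based surrogate,
# not the paper's exact method. `black_box`, `k`, and `max_depth` are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier  # stands in for the black box
from sklearn.tree import DecisionTreeClassifier, export_text


def mdav_centroids(X, k):
    """Greedy MDAV-style microaggregation: repeatedly take the point farthest
    from the mean of the remaining data together with its k-1 nearest
    neighbours as one group, and return the group centroids."""
    remaining = list(range(len(X)))
    centroids = []
    while len(remaining) >= k:
        pts = X[remaining]
        far = remaining[int(np.argmax(np.linalg.norm(pts - pts.mean(axis=0), axis=1)))]
        dists = np.linalg.norm(X[remaining] - X[far], axis=1)
        group = [remaining[i] for i in np.argsort(dists)[:k]]
        centroids.append(X[group].mean(axis=0))
        remaining = [i for i in remaining if i not in group]
    if remaining:  # fold any leftover points into one final group
        centroids.append(X[remaining].mean(axis=0))
    return np.array(centroids)


# Toy data and a black-box classifier whose decisions we want to explain.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Microaggregate the training data (k controls the privacy side of the
# trade-off) and label each centroid with the black box's prediction.
centroids = mdav_centroids(X, k=10)
centroid_labels = black_box.predict(centroids)

# Shallow surrogate tree: the small depth keeps it comprehensible to humans.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(centroids, centroid_labels)
print(export_text(surrogate))
```

Under these assumptions, larger k yields fewer, more aggregated centroids (more privacy, coarser surrogate), while the tree's depth bound caps the size of the explanation a human has to read.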
Keywords
Explainability, Machine learning, Data protection, Microaggregation, Privacy