Inducing Readable Oblique Decision Trees

2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), 2018

Cited by 2 | Views 1
Abstract
Although machine learning models appear in more and more practical applications, stakeholders can be wary that they are not hard-coded and fully specified. To foster trust, it is crucial to provide models whose predictions are explainable. Decision Trees can be understood by humans if they are simple enough, but they suffer in accuracy compared to other common machine learning methods. Oblique Decision Trees can provide better accuracy and smaller trees, but their decision rules are more complex. This article presents MUST (Multivariate Understandable Statistical Tree), an Oblique Decision Tree split algorithm based on Linear Discriminant Analysis that aims to preserve explainability by limiting the number of variables that appear in each decision rule.
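The abstract only sketches the split procedure, so the snippet below is a minimal, hedged illustration of the general idea: an LDA-based oblique split whose rule involves only a small number of variables. It is not the paper's algorithm; the feature-selection step (ANOVA F-scores via scikit-learn's SelectKBest) and the use of the fitted LDA decision hyperplane as the split are assumptions made purely for illustration.

```python
# Minimal sketch of an LDA-based oblique split that caps the number of
# variables appearing in the rule, in the spirit of MUST as described in
# the abstract. The feature-selection step (ANOVA F-scores) and the use of
# the LDA decision hyperplane as the split are illustrative assumptions,
# not the paper's exact procedure.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif


def lda_oblique_split(X, y, max_vars=3):
    """Return (feature indices, weights, threshold) for one readable oblique split."""
    # Keep at most `max_vars` features so the resulting rule stays readable.
    selector = SelectKBest(f_classif, k=min(max_vars, X.shape[1])).fit(X, y)
    idx = np.flatnonzero(selector.get_support())

    # Fit LDA on the reduced feature set (binary case); its decision
    # hyperplane yields the oblique rule  w . x_selected > threshold.
    lda = LinearDiscriminantAnalysis().fit(X[:, idx], y)
    w = lda.coef_[0]
    threshold = -lda.intercept_[0]
    return idx, w, threshold


if __name__ == "__main__":
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)
    y = (y == 0).astype(int)  # binarise so a single split makes sense
    idx, w, thr = lda_oblique_split(X, y, max_vars=2)
    print("rule uses features", idx, "with weights", np.round(w, 3),
          "and threshold", round(float(thr), 3))
```

The point of the cap on `max_vars` is readability: a rule such as "0.8 * petal_length + 0.4 * petal_width > 2.1" is still something a human can inspect, whereas an oblique split over all features is not.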
Keywords
Oblique Decision Tree, Decision Trees, Explainable AI, Linear Discriminant Analysis, Machine Learning