The Alternating Decision Tree Learning Algorithm

ICML '99: Proceedings of the Sixteenth International Conference on Machine Learning (1999)

Abstract
The application of boosting procedures to decision tree algorithms has been shown to produce very accurate classifiers. These classifiers take the form of a majority vote over a number of decision trees. Unfortunately, such classifiers are often large, complex, and difficult to interpret. This paper describes a new type of classification rule, the alternating decision tree, which is a generalization of decision trees, voted decision trees, and voted decision stumps. At the same time, classifiers of this type are relatively easy to interpret. We present a learning algorithm for alternating decision trees that is based on boosting. Experimental results show it is competitive with boosted decision tree algorithms such as C5.0, and it generates rules that are usually smaller in size and thus easier to interpret. In addition, these rules yield a natural measure of classification confidence, which can be used to improve accuracy at the cost of abstaining from predicting examples that are hard to classify.
Keywords
Alternating Decision Tree Learning
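
To make the idea concrete, below is a minimal sketch of how an alternating decision tree could score an instance: prediction nodes carry real-valued contributions, decision nodes test base conditions, and the instance's score is the sum of prediction values along every path it satisfies; the sign of the score gives the class and its magnitude serves as the confidence measure mentioned in the abstract. The node layout, field names, and toy conditions are illustrative assumptions, not the paper's notation or its boosting-based learning procedure.

```python
# Illustrative sketch (assumed structure): an ADTree alternates prediction
# nodes (real-valued contributions) and decision nodes (base conditions).
# Scoring sums the contributions along all paths consistent with the input;
# sign(score) -> class, |score| -> confidence.

from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class DecisionNode:
    condition: Callable[[dict], bool]              # base condition, e.g. "age > 40"
    if_true: Optional["PredictionNode"] = None     # child used when condition holds
    if_false: Optional["PredictionNode"] = None    # child used otherwise


@dataclass
class PredictionNode:
    value: float                                   # contribution added to the score
    children: List[DecisionNode] = field(default_factory=list)


def score(node: PredictionNode, x: dict) -> float:
    """Sum contributions along every path of the tree consistent with x."""
    total = node.value
    for d in node.children:
        branch = d.if_true if d.condition(x) else d.if_false
        if branch is not None:
            total += score(branch, x)
    return total


# Toy tree with hypothetical features and values: positive score -> class +1.
root = PredictionNode(0.2, [
    DecisionNode(lambda x: x["age"] > 40,
                 if_true=PredictionNode(0.7),
                 if_false=PredictionNode(-0.4)),
    DecisionNode(lambda x: x["income"] > 50_000,
                 if_true=PredictionNode(0.3),
                 if_false=PredictionNode(-0.6)),
])

s = score(root, {"age": 45, "income": 30_000})     # 0.2 + 0.7 - 0.6 = 0.3
label, confidence = (1 if s >= 0 else -1), abs(s)
print(label, round(confidence, 2))                  # 1 0.3
```

A low |score| flags an example the tree is unsure about, which is how the abstract's abstention option could be realized: refuse to predict when the confidence falls below a threshold.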