Alternating Decision Forests

Computer Vision and Pattern Recognition (2013)

Cited by 83 | Views 4
Abstract
This paper introduces a novel classification method termed Alternating Decision Forests (ADFs), which formulates the training of Random Forests explicitly as a global loss minimization problem. During training, the losses are minimized by maintaining an adaptive weight distribution over the training samples, similar to Boosting methods. To keep the method as flexible and general as possible, we adopt the principle of gradient descent in function space, which makes it possible to minimize arbitrary losses. Contrary to Boosted Trees, in our method the loss minimization is an inherent part of the tree growing process, which preserves the benefits of standard Random Forests, such as parallel processing. We derive the new classifier and present a discussion and evaluation on standard machine learning data sets. Furthermore, we show how ADFs can easily be integrated into an object detection application. Compared to both standard Random Forests and Boosted Trees, ADFs give better performance in our experiments, while yielding more compact models in terms of tree depth.
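The training scheme the abstract describes, alternating between growing the forest one depth stage at a time and reweighting the samples with the negative gradient of a global loss, can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the names train_adf and predict_adf are invented here, a logistic loss is assumed, and each stage refits full trees up to the current depth rather than extending the previous stage's trees breadth-first as the paper does.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_adf(X, y, n_trees=10, max_depth=6, seed=0):
    """Simplified ADF-style sketch for binary labels y in {-1, +1}.

    Stage d = 1..max_depth: fit all trees with the current sample-weight
    distribution, then set the weights to the negative gradient of a
    global logistic loss evaluated on the forest margin (assumption:
    the paper allows arbitrary differentiable losses; logistic is one
    common choice).
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    weights = np.full(n, 1.0 / n)
    forest = []
    for depth in range(1, max_depth + 1):
        stage = []
        for _ in range(n_trees):
            idx = rng.choice(n, size=n, replace=True)  # bagging, as in RFs
            tree = DecisionTreeClassifier(
                max_depth=depth,
                max_features="sqrt",  # random feature subsets per split
                random_state=int(rng.integers(1 << 31)),
            )
            tree.fit(X[idx], y[idx], sample_weight=weights[idx])
            stage.append(tree)
        forest = stage
        # Forest score F(x) in [-1, 1]: averaged per-tree P(y=+1), rescaled.
        proba = np.mean([t.predict_proba(X)[:, 1] for t in forest], axis=0)
        margin = y * (2.0 * proba - 1.0)
        # Negative gradient of the logistic loss log(1 + exp(-margin))
        # gives the new weights: hard (low-margin) samples gain weight.
        weights = 1.0 / (1.0 + np.exp(margin))
        weights /= weights.sum()
    return forest

def predict_adf(forest, X):
    proba = np.mean([t.predict_proba(X)[:, 1] for t in forest], axis=0)
    return np.where(proba >= 0.5, 1, -1)
```

Assuming X is an (n, d) feature array and y a vector in {-1, +1}, forest = train_adf(X, y) trains the ensemble and predict_adf(forest, X) returns predicted labels. Note that, unlike in Boosted Trees, the trees within each stage are independent given the weights, so they can be trained in parallel, which is the property the paper emphasizes.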
Keywords
image classification method, tree depth, parallel processing, decision forests, training sample, boosted trees, object detection application, adaptive weight distribution, ADFs, loss minimization, novel classification method, learning (artificial intelligence), gradient descent method, standard random forests, random forests, arbitrary loss, boosting, global loss minimization problem, alternating decision forests, image classification, machine learning datasets, gradient methods, boosting methods, object detection, global loss, arbitrary losses minimization, minimization, common random forests, tree growing process, entropy, vegetation, decision trees