Efficient ensembles of distance-based label ranking trees

Expert Systems (2024)

Abstract
Ensembles of label ranking trees (LRTs) are currently the state-of-the-art approach to the label ranking problem. Recently, bagging, boosting, and random forest methods have been proposed, all based on the LRT algorithm, which adapts regression/classification trees to the label ranking problem. The LRT algorithm uses the theoretically grounded Mallows probability distribution to select the best split when growing the tree, and an EM-type process to complete the rankings in the training data when they are incomplete. These two steps have proven to be accurate, but require a large computational effort. This article proposes two alternative methods that replace the use of the Mallows distribution with distance-based criteria to select the best split at each inner node of the tree. Moreover, these distance-based criteria handle incomplete rankings natively, thus avoiding the completion process. We have carried out an extensive experimental evaluation, which shows that (1) ensemble methods (bagging and random forest) built on the two proposed modifications of the LRT algorithm are an order of magnitude faster than those using the original Mallows-based LRT algorithm; (2) ensembles using the proposed LRT methods are significantly more accurate in the presence of incomplete rankings, while being at least as accurate in the complete case; and (3) the two modified LRT algorithms are themselves an order of magnitude faster than the Mallows-based LRT, while being at least as accurate on both complete and incomplete rankings.
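The core idea described in the abstract, replacing the Mallows-based split selection with a distance-based homogeneity criterion over possibly incomplete rankings, can be illustrated with a minimal sketch. The ranking encoding, the pairwise generalization of the Kendall distance, and the sum-of-pairwise-distances split score below are illustrative assumptions for exposition, not the paper's actual criterion or implementation.

```python
# Minimal sketch (assumed, not the paper's exact method) of a distance-based
# split score for label ranking trees. Rankings are position vectors over the
# labels (smaller = preferred); None marks a label absent from an incomplete
# ranking. The distance counts discordant label pairs only when both labels
# are ranked in both rankings, one common way to generalize Kendall distance
# to incomplete rankings.

from itertools import combinations
from typing import Optional, Sequence

Ranking = Sequence[Optional[int]]  # position of each label, None if unranked


def generalized_kendall(r1: Ranking, r2: Ranking) -> float:
    """Count label pairs ordered differently by r1 and r2, skipping pairs
    where either ranking leaves one of the labels unranked."""
    discordant = 0
    for i, j in combinations(range(len(r1)), 2):
        if None in (r1[i], r1[j], r2[i], r2[j]):
            continue  # pair not comparable under incomplete information
        if (r1[i] - r1[j]) * (r2[i] - r2[j]) < 0:
            discordant += 1
    return float(discordant)


def node_dispersion(rankings: list[Ranking]) -> float:
    """Sum of pairwise distances inside a node: lower means the node's
    rankings are more homogeneous."""
    return sum(generalized_kendall(a, b) for a, b in combinations(rankings, 2))


def split_score(left: list[Ranking], right: list[Ranking]) -> float:
    """Score a candidate split by the total dispersion of its children;
    the tree-growing loop would pick the split with the smallest score."""
    return node_dispersion(left) + node_dispersion(right)


if __name__ == "__main__":
    # Three labels; the third ranking is incomplete (last label unranked).
    rankings = [(1, 2, 3), (1, 3, 2), (2, 1, None)]
    left, right = [rankings[0], rankings[1]], [rankings[2]]
    print(split_score(left, right))  # 1.0: one discordant pair in the left child
```

Because incomparable pairs are simply skipped, incomplete rankings contribute to the score without any completion step, which is the practical advantage the abstract attributes to the distance-based criteria.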
Keywords
ensemble methods, generalized Kendall distance, label ranking, machine learning, preference learning