Evolving Gradient Boost: A Pruning Scheme Based on Loss Improvement Ratio for Learning Under Concept Drift

IEEE TRANSACTIONS ON CYBERNETICS (2023)

Cited by 10 | Views 56
Abstract
In nonstationary environments, data distributions can change over time. This phenomenon is known as concept drift, and the related models need to adapt if they are to remain accurate. With gradient boosting (GB) ensemble models, selecting which weak learners to keep or prune so as to maintain model accuracy under concept drift is a nontrivial problem. Unlike existing models such as AdaBoost, which can directly compare weak learners' performance by their accuracy (a metric in [0, 1]), in GB, weak learners' performance is measured on different scales. To address this performance measurement scaling issue, we propose a novel criterion to evaluate weak learners in GB models, called the loss improvement ratio (LIR). Based on LIR, we develop two pruning strategies: 1) naive pruning (NP), which simply deletes all learners with increasing loss, and 2) statistical pruning (SP), which removes learners only if their loss increase meets a significance threshold. We also devise a scheme to dynamically switch between NP and SP to achieve the best performance. We implement the scheme as a concept drift learning algorithm called evolving gradient boost (LIR-eGB). On average, LIR-eGB delivered the best performance against state-of-the-art methods on both stationary and nonstationary data.
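The abstract describes LIR and the two pruning strategies only at a high level. Below is a minimal Python sketch of how LIR-based pruning could be organized; the LIR formula, the one-sided t-test used for SP, the function names, and the alpha threshold are all illustrative assumptions, not the paper's exact definitions.

```python
from scipy import stats


def loss_improvement_ratio(loss_before, loss_after):
    """Relative loss change attributed to one weak learner.

    Negative values mean the learner reduced the ensemble loss;
    positive values mean the loss increased after the learner was added.
    NOTE: this formula is an illustrative assumption, not the paper's
    exact LIR definition.
    """
    return (loss_after - loss_before) / max(loss_before, 1e-12)


def naive_pruning(lir_per_learner):
    """NP: keep only learners whose loss did not increase (LIR <= 0)."""
    return [i for i, lir in enumerate(lir_per_learner) if lir <= 0]


def statistical_pruning(lir_history, alpha=0.05):
    """SP: drop a learner only when its recent LIR values are
    significantly greater than zero (one-sided one-sample t-test).

    lir_history: dict mapping learner index -> list of LIR values
    observed over recent data chunks (at least two values each).
    """
    kept = []
    for i, history in lir_history.items():
        # One-sided test: is the mean LIR significantly greater than 0?
        result = stats.ttest_1samp(history, popmean=0.0, alternative="greater")
        if result.pvalue >= alpha:  # no significant loss increase -> keep learner
            kept.append(i)
    return kept


# Example usage with assumed LIR observations over recent data chunks.
lir_history = {
    0: [-0.04, -0.02, -0.05],  # consistently helpful learner
    1: [0.03, 0.06, 0.04],     # loss keeps increasing -> likely pruned
}
latest_lir = [h[-1] for h in lir_history.values()]
print(naive_pruning(latest_lir))        # -> [0]
print(statistical_pruning(lir_history)) # -> [0]
```

The abstract also mentions dynamically switching between NP and SP; how that switch is triggered is not detailed here, so one assumed heuristic would be to fall back to NP whenever too few LIR observations are available for a stable significance test.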
Keywords
Bagging, Boosting, Heuristic algorithms, Data models, Adaptation models, Training, Australia, Concept drift, data stream, decision tree, ensemble learning