Performance Augmentation of Base Classifiers Using Adaptive Boosting Framework for Medical Datasets

APPLIED COMPUTATIONAL INTELLIGENCE AND SOFT COMPUTING (2023)

Abstract
This paper investigates the performance enhancement of base classifiers within the AdaBoost framework applied to medical datasets. Adaptive boosting (AdaBoost) is an instance of boosting that combines other classifiers to enhance their performance. We conducted a comprehensive experiment to assess the efficacy of twelve base classifiers within the AdaBoost framework, namely Bayes network, decision stump, ZeroR, decision tree, Naive Bayes, J-48, voted perceptron, random forest, bagging, random tree, stacking, and AdaBoost itself. The experiments were carried out on five medical datasets covering various types of cancers: global cancer map (GCM), lymphoma-I, lymphoma-II, leukaemia, and embryonal tumours. The evaluation focuses on the accuracy, precision, and efficiency of the base classifiers within the AdaBoost framework. The results show that Naive Bayes, Bayes network, and voted perceptron improve far more than the remaining base classifiers, attaining accuracies as high as 94.74%, 97.78%, and 97.78%, respectively. The results also show that in most cases the base classifiers perform better with AdaBoost than on their own; for example, the accuracy of the voted perceptron improves by up to 13.34%, and that of bagging by up to 7%. This research aims to identify base classifiers with optimal boosting capabilities within the AdaBoost framework for medical datasets. The significance of these results is that they provide insight into how base classifiers behave inside a boosting framework, which can be used to enhance classification performance in scenarios where individual classifiers do not perform up to the mark.