Classification of Gastrointestinal Cancer through Explainable AI and Ensemble Learning

2023 Sixth International Conference of Women in Data Science at Prince Sultan University (WiDS PSU), 2023

Abstract
Despite the effectiveness of AI-assisted cancer detection systems, obtaining authorisation for their deployment in clinical settings has proven challenging owing to the inadequate explainability of their underlying mechanisms. Because the decision-making of AI-driven systems lacks transparency, many medical practitioners remain reluctant to employ AI-assisted diagnoses. Explainable Artificial Intelligence (XAI) is an emerging topic in AI with the potential to open the computational black box posed by AI systems, allowing a model's predictions to be explained. In this research work, we apply SHapley Additive exPlanations (SHAP), a model-prediction explanation approach, to an ensemble model built by averaging the predictions of three convolutional neural networks (InceptionV3, Inception-ResNetV2, and VGG16). The models were trained on the pathological findings of the Kvasir-V2 dataset and achieved an accuracy of 93.17% and an F1-score of 97%. After training the ensemble model, images from the three classes were analysed with SHAP to explain the features that drive its predictions. The results indicate a favourable and promising direction for XAI approaches in healthcare, specifically in the detection of gastrointestinal cancer.
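As a concrete illustration, the sketch below (not the authors' released code) shows how such a prediction-averaging ensemble and its SHAP analysis could be wired up in TensorFlow/Keras. The 224x224 input size, the classification head, the three-class setup, and the use of shap.GradientExplainer are our assumptions, and the random arrays merely stand in for preprocessed Kvasir-V2 images; it also assumes a TensorFlow/Keras version compatible with shap's gradient explainer.

```python
# Minimal sketch of an averaging ensemble over three ImageNet backbones,
# followed by SHAP analysis of a few test images. Illustrative only.
import numpy as np
import tensorflow as tf
import shap

NUM_CLASSES = 3                # assumed: the three pathological-finding classes
INPUT_SHAPE = (224, 224, 3)    # assumed input resolution

def branch(backbone_fn, name):
    """Wrap an ImageNet backbone with a small softmax classification head."""
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=INPUT_SHAPE)
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax",
                                name=f"{name}_probs")(x)
    return tf.keras.Model(base.input, out, name=name)

inp = tf.keras.Input(shape=INPUT_SHAPE)
members = [
    branch(tf.keras.applications.InceptionV3, "inception_v3"),
    branch(tf.keras.applications.InceptionResNetV2, "inception_resnet_v2"),
    branch(tf.keras.applications.VGG16, "vgg16"),
]
# Averaging ensemble: the final prediction is the mean of the three softmax outputs.
avg = tf.keras.layers.Average()([m(inp) for m in members])
ensemble = tf.keras.Model(inp, avg, name="avg_ensemble")
ensemble.compile(optimizer="adam", loss="categorical_crossentropy",
                 metrics=["accuracy"])
# ... ensemble.fit(...) on the Kvasir-V2 pathological-finding images ...

# Placeholders standing in for real preprocessed Kvasir-V2 arrays.
x_train = np.random.rand(50, *INPUT_SHAPE).astype("float32")
x_test = np.random.rand(3, *INPUT_SHAPE).astype("float32")

# Explain the trained ensemble: a background sample anchors the expectations,
# and per-pixel SHAP values show which regions push each class probability up.
explainer = shap.GradientExplainer(ensemble, x_train[:25])
shap_values = explainer.shap_values(x_test)
shap.image_plot(shap_values, x_test)
```

Averaging the softmax outputs (rather than hard-voting on labels) keeps the ensemble end-to-end differentiable, which is what lets a gradient-based SHAP explainer attribute the combined prediction back to image pixels in a single pass.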
Keywords
Gastrointestinal cancer, Endoscopic images, Ensemble model, Explainable AI, SHAP