User Trust on an Explainable AI-based Medical Diagnosis Support System

arXiv (2022)

Abstract
Recent research has shown that system explainability improves user trust and willingness to use medical AI for diagnostic support. In this paper, we use chest disease diagnosis based on X-Ray images as a case study to investigate user trust and reliance. Building on explainability, we propose a support system in which users (radiologists) can view causal explanations for final decisions. After observing these causal explanations, users provided their opinions of the model predictions and could correct explanations they disagreed with. We measured user trust as the agreement between the model's and the radiologist's diagnosis, together with the radiologists' feedback on the model explanations. Additionally, participants self-reported their trust in the system. We tested our model on the CXR-Eye dataset, where it achieved an overall accuracy of 74.1%. However, the experts in our user study agreed with the model in only 46.4% of the cases, indicating that trust needs to be improved. The self-reported trust score was 3.2 on a scale of 1.0 to 5.0, showing that users tended to trust the model, though trust could still be strengthened.
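The abstract's primary trust metric is the agreement rate between the model's predictions and the radiologists' diagnoses (46.4% in the study). A minimal sketch of how such an agreement rate could be computed is shown below; the function name and toy labels are illustrative assumptions, not from the paper.

```python
# Sketch (not the paper's code): agreement rate between model predictions
# and expert (radiologist) diagnoses over the same set of cases.

def agreement_rate(model_preds, expert_labels):
    """Return the fraction of cases where the expert agreed with the model."""
    assert len(model_preds) == len(expert_labels), "one label per case"
    agree = sum(m == e for m, e in zip(model_preds, expert_labels))
    return agree / len(model_preds)

# Toy example: the radiologist agrees with the model on 2 of 4 cases.
preds = ["pneumonia", "effusion", "normal", "cardiomegaly"]
labels = ["pneumonia", "normal", "normal", "effusion"]
print(agreement_rate(preds, labels))  # -> 0.5
```

In the study, this raw agreement was complemented by explanation feedback and a self-reported trust score, since agreement alone conflates model accuracy with user trust.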
Keywords
medical diagnosis, trust, AI-based