To Trust or Not to Trust: Towards a novel approach to measure trust for XAI systems
CoRR (2024)
Abstract
The increasing reliance on Deep Learning models, combined with their inherent
lack of transparency, has spurred the development of a novel field of study
known as eXplainable AI (XAI). XAI methods seek to enhance end-users' trust
in automated systems by providing insights into the rationale behind their
decisions. This paper presents a novel approach for measuring user trust in
XAI systems, enabling their refinement. Our proposed metric combines
performance metrics and trust indicators from an objective perspective. To
validate this methodology, we conducted a case study in a realistic medical
scenario: the use of an XAI system for the detection of pneumonia from
X-ray images.
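
As a rough illustration only: the abstract does not give the metric's actual formula, but a combination of a performance metric and an objective trust indicator could be sketched as a weighted sum. The function name, the choice of inputs (accuracy and a user-acceptance rate), and the weight `alpha` below are all hypothetical assumptions, not the authors' definitions.

```python
def trust_score(performance: float, trust_indicator: float,
                alpha: float = 0.5) -> float:
    """Hypothetical combined score: weighted sum of a performance metric
    (e.g. model accuracy) and an objective trust indicator (e.g. the rate
    at which users accept the system's explained decisions), both in [0, 1]."""
    if not (0.0 <= performance <= 1.0 and 0.0 <= trust_indicator <= 1.0):
        raise ValueError("inputs must lie in [0, 1]")
    return alpha * performance + (1.0 - alpha) * trust_indicator

# Example: a model with 90% accuracy whose explained decisions
# users accept 70% of the time.
score = trust_score(0.9, 0.7)
print(round(score, 2))
```

With equal weights this yields 0.8; in practice the weighting would need to be calibrated against the user study described in the paper.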