Self-explanatory error checking capability for classifier-based Decision Support Systems

2022 IEEE Latin American Conference on Computational Intelligence (LA-CCI), 2022

Abstract
The eXplainable Artificial Intelligence field emerged to address, at least partially, the need to explain opaque intelligent models. However, even when intelligent decision support systems employ explainability techniques, it is still up to the decision maker to inspect those explanations and to perceive any problems contained in them. This work proposes an approach to imbue classifier-based Decision Support Systems with the capability of self-detecting and explaining inference errors. The hypothesis is that fostering such self-awareness might improve system use, aiding the decision maker in perceiving problems with proposed solutions and thus selecting better choices. For the studied datasets, experimental results showed that the approach was effective in detecting over 60% of decision inference errors and, in the best cases, improved accuracy by 20% when all detected errors were corrected.
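The abstract does not specify how inference errors are self-detected, so the following is only an illustrative sketch of the general idea: a classifier that, alongside each prediction, reports a confidence margin and flags low-margin inferences as suspected errors for the decision maker to review. The classifier type (nearest centroid), the `margin_threshold` parameter, and all names are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only: an "error aware" classifier that flags
# low-confidence inferences as suspected errors for human review.
# The detection rule (centroid-distance margin) is an assumption,
# not the technique described in the paper.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

class ErrorAwareClassifier:
    """Nearest-centroid classifier with a self-error check.

    A small margin between the two nearest class centroids marks the
    prediction as a suspected error, which a Decision Support System
    could surface to the decision maker together with an explanation.
    """

    def __init__(self, margin_threshold=0.5):
        self.margin_threshold = margin_threshold  # illustrative value
        self.centroids = {}

    def fit(self, X, y):
        by_class = {}
        for x, label in zip(X, y):
            by_class.setdefault(label, []).append(x)
        self.centroids = {c: centroid(pts) for c, pts in by_class.items()}

    def predict_with_check(self, x):
        """Return (predicted_label, suspected_error_flag)."""
        ranked = sorted(self.centroids, key=lambda c: dist(x, self.centroids[c]))
        best, second = ranked[0], ranked[1]
        margin = dist(x, self.centroids[second]) - dist(x, self.centroids[best])
        return best, margin < self.margin_threshold

# Toy usage: two separated classes plus one ambiguous query point.
clf = ErrorAwareClassifier(margin_threshold=0.5)
clf.fit([(0, 0), (0, 1), (5, 5), (5, 6)], ["A", "A", "B", "B"])
print(clf.predict_with_check((0.2, 0.5)))   # confident -> ("A", False)
print(clf.predict_with_check((2.5, 2.8)))   # ambiguous -> flagged ("A", True)
```

In a full system, a flagged prediction would also carry an explanation of why it is suspect (here, the small margin), matching the paper's goal of explaining, not just detecting, inference errors.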
Key words
Decision Support Systems, eXplainable Artificial Intelligence, Error Aware Systems