Justifying convolutional neural network with argumentation for explainability.

Informatica (Slovenia), 2023

Abstract
Convolutional neural networks (CNNs) have emerged as one of the most accurate methods for sentiment analysis, but they are largely uninterpretable, whereas case-based reasoning (CBR) is less accurate but offers interpretable outputs in the form of arguments from analogy. This paper presents an approach that combines these two methods, CNN for accuracy and CBR for explainability, using an assumption-based argumentation (ABA) framework. Our approach focuses on justifying CNN outputs with analogous sentences retrieved by CBR, while ensuring that the combined process is argumentative and hence self-explainable. To demonstrate the proposal, we construct a CNN model M1 and a CBR model M2 for sentiment analysis on different subsets of a dataset, whose remaining part is used to test these input models and compare them with the combined models. For an input sentence, if M1 and M2 predict the same sentiment, the analogous sentence retrieved by M2 is used to explain the sentiment. If they give conflicting sentiments, a hybrid model M3 determines which one should be followed using a system of strict rules that takes into account how assertive M1 and M2 are. Another hybrid model M4, implemented as an ABA framework, improves on M3 by considering the probability distribution over the set of all labels from M1 and the second (or third) most similar sentences from M2. M3 and M4 preserve the accuracy of the CNN model (specifically, 88.32% and 88.28%, compared with the CNN's 87.59%), and they justify 69.95% and 74.53% of CNN outputs, respectively.
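
The agree/disagree combination described in the abstract can be illustrated with a minimal sketch. The data classes, function name, and the single assertiveness-based tie-breaking rule below are hypothetical simplifications (the paper's M3 uses a system of strict rules over the assertiveness of both models, and M4 additionally consults the full label distribution and further retrieved sentences); the sketch only shows how agreement yields a justification and how a conflict might be resolved.

    # Minimal sketch (hypothetical interfaces), assuming M1 is a CNN classifier
    # and M2 a case-based reasoner returning the most similar labelled sentence.
    from dataclasses import dataclass

    @dataclass
    class CnnOutput:
        label: str          # predicted sentiment, e.g. "positive"
        confidence: float   # assertiveness of the CNN (max class probability)

    @dataclass
    class CbrOutput:
        label: str          # sentiment of the retrieved analogous sentence
        similarity: float   # assertiveness of CBR (similarity score in [0, 1])
        analogue: str       # the analogous sentence used as an argument

    def combine(cnn: CnnOutput, cbr: CbrOutput) -> tuple[str, str | None]:
        """Return (sentiment, explanation); the explanation is the analogous
        sentence whenever the CBR retrieval can justify the chosen label."""
        if cnn.label == cbr.label:
            # Agreement: the CNN prediction is justified by the analogous sentence.
            return cnn.label, cbr.analogue
        # Conflict: follow the more assertive model (illustrative rule only).
        if cnn.confidence >= cbr.similarity:
            return cnn.label, None          # CNN output stands, without justification
        return cbr.label, cbr.analogue      # CBR output stands, with its argument

In this simplification, an explained prediction is one accompanied by an analogous sentence; the paper's reported justification rates (69.95% for M3, 74.53% for M4) correspond to the fraction of CNN outputs for which such an argument is produced.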
Key words
convolutional neural network, argumentation, neural network