Advanced linguistic explanations of classifier decisions for users' annotation support

2016 IEEE 8th International Conference on Intelligent Systems (IS)

Abstract
We propose several new concepts for providing enhanced explanations of classifier decisions in linguistic (human-readable) form. These are intended to help operators better understand the decision process and to support them during sample annotation, improving their certainty and consistency in successive labeling cycles. This is expected to lead to better, more consistent data sets (streams) for use in training and updating classifiers. The enhanced explanations are composed of 1) grounded reasons for classification decisions, represented as linguistically readable fuzzy rules, 2) the classifier's level of uncertainty about its decisions together with possible alternative suggestions, 3) the degree of novelty of the current sample, and 4) the levels of impact of the input features on the current classification response. The last of these is also used to shorten the rules to a maximum of 3 to 4 antecedent parts to ensure readability for operators and users. The proposed techniques were embedded within an annotation GUI and applied to a real-world application scenario from the field of visual inspection. The usefulness of the proposed linguistic explanations was evaluated in experiments conducted with six operators. The results indicate that there is approximately an 80% chance that operator/user labeling behavior improves significantly when enhanced linguistic explanations are provided, whereas this chance drops to 10% when only the classifier responses are shown.
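To make the four explanation components concrete, the sketch below shows one plausible way to assemble such a linguistic explanation: antecedent parts of the winning fuzzy rule are ranked by instance-based feature impact and truncated to at most four parts, and the classifier's certainty, an alternative class suggestion, and the sample's novelty degree are appended. This is a minimal illustration only, not the paper's implementation; all names, thresholds, and example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Antecedent:
    feature: str     # linguistic variable, e.g. "grey-level contrast"
    fuzzy_set: str   # fuzzy term, e.g. "HIGH"
    impact: float    # instance-based impact on the current decision, in [0, 1]

def explain(antecedents, decision, certainty, alternative, novelty, max_parts=4):
    """Compose a linguistic explanation from a winning fuzzy rule.

    Antecedent parts are ranked by instance-based impact and the rule is
    cut to at most `max_parts` parts, mirroring the 3-4 part limit the
    abstract describes for readability.
    """
    top = sorted(antecedents, key=lambda a: a.impact, reverse=True)[:max_parts]
    condition = " AND ".join(f"{a.feature} is {a.fuzzy_set}" for a in top)
    lines = [
        f"IF {condition} THEN class = {decision}",
        # Show an alternative suggestion only when certainty is low
        # (the 0.8 threshold is an assumption for this sketch).
        f"certainty: {certainty:.0%}"
        + (f" (possible alternative: {alternative})" if certainty < 0.8 else ""),
        f"novelty of this sample: {novelty:.0%}",
    ]
    return "\n".join(lines)

# Hypothetical visual-inspection example with five candidate antecedents;
# the two lowest-impact parts are dropped from the displayed rule.
rule = [
    Antecedent("grey-level contrast", "HIGH", 0.9),
    Antecedent("blob area", "LARGE", 0.7),
    Antecedent("edge density", "MEDIUM", 0.4),
    Antecedent("aspect ratio", "LOW", 0.2),
    Antecedent("brightness", "MEDIUM", 0.1),
]
print(explain(rule, decision="defect", certainty=0.72,
              alternative="scratch", novelty=0.15))
```

In an annotation GUI such a string would accompany the classifier's suggested label, so the operator can weigh the stated reasons, certainty, and novelty before confirming or correcting the label.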
Keywords
linguistic explanation of classifier decisions, operators' labeling and annotation behavior, classification reasons, transparent fuzzy rules, classifier certainty, degree of novelty, instance-based feature importance levels