Explainability of Image Semantic Segmentation Through SHAP Values.

ICPR Workshops (3), 2022

Abstract
The use of Deep Neural Networks in high-level applications is increasing significantly. However, such models' decisions are not straightforward for humans to understand, which may limit their use in critical applications. To address this issue, recent research has introduced explanation methods, typically for classification and captioning. Nevertheless, explainability methods still need to be developed for some tasks. This includes image segmentation, an essential component of many high-level applications. In this paper, we propose a general workflow for adapting state-of-the-art explainability methods, especially SHAP, to image segmentation tasks. The approach allows for the explanation of single pixels as well as image areas. We show the relevance of the approach on a critical application, namely oil slick pollution detection on the sea surface. We also show the applicability of the method on a more standard multimedia semantic segmentation task. The conducted experiments highlight the relevant features on which the models derive their local results and help identify general model behaviours.
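The core idea of adapting SHAP to segmentation is to reduce the dense per-pixel output to a scalar (e.g., the class score at one target pixel) and attribute that scalar to image regions. The following is a minimal, hypothetical sketch of this workflow, not the paper's actual implementation: it uses a toy stand-in model, grid patches in place of learned superpixels, and a plain Monte Carlo permutation estimate of Shapley values.

```python
import numpy as np

rng = np.random.default_rng(0)


def toy_segmentation_model(img):
    # Hypothetical stand-in for a real segmentation network:
    # the per-pixel class score is simply the pixel intensity.
    return img  # shape (H, W): one score map


def pixel_score(img, target):
    # Scalar to explain: the model's score at a single target pixel.
    return toy_segmentation_model(img)[target]


def grid_regions(h, w, patch):
    # Partition the image into square patches ("superpixels").
    regions = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            mask = np.zeros((h, w), dtype=bool)
            mask[i:i + patch, j:j + patch] = True
            regions.append(mask)
    return regions


def shap_monte_carlo(img, target, regions, baseline=0.0, n_samples=200):
    # Monte Carlo estimate of Shapley values: for random region
    # orderings, accumulate each region's marginal contribution to
    # the target-pixel score when revealed over a baseline image.
    n = len(regions)
    phi = np.zeros(n)
    for _ in range(n_samples):
        perm = rng.permutation(n)
        masked = np.full_like(img, baseline)
        prev = pixel_score(masked, target)
        for r in perm:
            masked[regions[r]] = img[regions[r]]
            cur = pixel_score(masked, target)
            phi[r] += cur - prev
            prev = cur
    return phi / n_samples


# Usage: a 4x4 image with a single bright pixel at (1, 1).
img = np.zeros((4, 4))
img[1, 1] = 1.0
regions = grid_regions(4, 4, patch=2)
phi = shap_monte_carlo(img, target=(1, 1), regions=regions)
# For this additive toy model, the patch containing (1, 1)
# receives all of the attribution.
```

Explaining an image area rather than a single pixel only changes the scalar being attributed (e.g., the mean class score over the area); the estimation loop is unchanged.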
Keywords
image semantic segmentation,explainability