Interpretability in Graph Neural Networks

Graph Neural Networks: Foundations, Frontiers, and Applications (2022)

Abstract
Interpretable machine learning, or explainable artificial intelligence, is experiencing rapid development to tackle the opacity of deep learning techniques. In graph analysis, motivated by the effectiveness of deep learning, graph neural networks (GNNs) have become increasingly popular for modeling graph data. Recently, a growing number of approaches have been proposed to provide explanations for GNNs or to improve GNN interpretability. In this chapter, we offer a comprehensive survey summarizing these approaches. Specifically, in the first section, we review the fundamental concepts of interpretability in deep learning. In the second section, we introduce post-hoc explanation methods for understanding GNN predictions. In the third section, we introduce advances in developing more interpretable models for graph data. In the fourth section, we introduce datasets and metrics for evaluating interpretations. Finally, we point out future directions for the topic.

7.1 Background: Interpretability in Deep Models

Deep learning has become an indispensable tool for a wide range of applications such as image processing, natural language processing, and speech recognition. Despite this success, deep models have been criticized as "black boxes" because of the complexity of how they process information and make decisions. In this section, we introduce the research background of interpretability in deep models, including the …

Ninghao Liu, Department of CSE, Texas A&M University. E-mail: nhliu43@tamu.edu
Qizhang Feng, Department of CSE, Texas A&M University. E-mail: qf31@tamu.edu
Xia Hu, Department of CSE, Texas A&M University. E-mail: xiahu@tamu.edu
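As a concrete illustration of the post-hoc explanation methods the abstract refers to, the sketch below computes a vanilla gradient saliency score for a node-classification GNN: the importance of each node is taken as the gradient norm of the predicted logit with respect to that node's input features. This is a minimal example assuming PyTorch and PyTorch Geometric; the model, the toy graph, and the function saliency_explanation are hypothetical, chosen for illustration rather than taken from the chapter.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    """Two-layer GCN node classifier (illustrative, not from the chapter)."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def saliency_explanation(model, x, edge_index, node_idx):
    """Score each node's importance for the prediction at node_idx as the
    L2 norm of the gradient of the predicted logit w.r.t. its features."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    logits = model(x, edge_index)
    target_class = logits[node_idx].argmax()
    logits[node_idx, target_class].backward()
    return x.grad.norm(dim=1)  # one score per node


# Toy usage: an undirected 4-node chain 0-1-2-3 with random features.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
model = GCN(in_dim=8, hidden_dim=16, num_classes=3)
scores = saliency_explanation(model, x, edge_index, node_idx=2)
print(scores)  # higher score = that node's features influence the prediction more

More elaborate post-hoc methods surveyed in the chapter, such as perturbation- or mask-based explainers like GNNExplainer, expose the same kind of interface: they assign importance scores to input nodes, edges, or features of a trained GNN without modifying the model itself.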