What Does My GNN Really Capture? On Exploring Internal GNN Representations

European Conference on Artificial Intelligence (ECAI), 2022

Abstract
Graph Neural Networks (GNNs) are very efficient at classifying graphs, but their internal functioning is opaque, which limits their field of application. Existing methods to explain GNNs focus on disclosing the relationships between input graphs and model decisions. In this article, we propose a method that goes further and isolates the internal features, hidden in the network layers, that are automatically identified by the GNN and used in the decision process. We show that this method makes it possible to identify the parts of the input graphs used by the GNN with much less bias than SOTA methods, and thus to bring confidence in the decision process.
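The paper itself provides no code here. As a rough illustration of what "internal features hidden in the network layers" can refer to, the sketch below records the hidden-layer node activations of a small GNN graph classifier with forward hooks, so that they can be inspected afterwards. It assumes PyTorch and PyTorch Geometric; the model (SmallGNN), layer names, and toy data are hypothetical and are not the paper's actual method.

```python
# Minimal sketch (not the paper's procedure): capture the hidden-layer
# activations of a graph classifier so an explanation method could
# analyse which internal features drive the decision.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class SmallGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.lin = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        h1 = F.relu(self.conv1(x, edge_index))
        h2 = F.relu(self.conv2(h1, edge_index))
        g = global_mean_pool(h2, batch)  # graph-level embedding
        return self.lin(g)

model = SmallGNN(in_dim=7, hidden_dim=32, num_classes=2)

# Forward hooks record the per-node output of each convolution layer.
activations = {}
def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.conv1.register_forward_hook(make_hook("conv1"))
model.conv2.register_forward_hook(make_hook("conv2"))

# Toy graph: 4 nodes, 7-dimensional features, one graph in the batch.
x = torch.randn(4, 7)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
batch = torch.zeros(4, dtype=torch.long)

logits = model(x, edge_index, batch)
# activations["conv1"] and activations["conv2"] now hold the internal
# node representations available for inspection.
print({k: v.shape for k, v in activations.items()})
```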
Keywords
gnn really capture, internal