Black-box Adversarial Attack and Defense on Graph Neural Networks

2022 IEEE 38th International Conference on Data Engineering (ICDE), 2022

Abstract
Graph neural networks (GNNs) have achieved great success on various graph tasks. However, recent studies have revealed that GNNs are vulnerable to adversarial attacks, including topology modifications and feature perturbations. Despite this fruitful progress, existing attackers either require node labels and GNN parameters to optimize a bi-level problem or cannot cover both topology modifications and feature perturbations, which makes them impractical, inefficient, or ineffective. In this paper, we propose a black-box attacker, PEEGA, which is restricted to accessing only node features and graph topology for practicability. Specifically, we propose to measure the negative impact of adversarial attacks from the perspective of node representations, and thereby formulate a single-level problem that can be solved efficiently. Furthermore, we observe that existing attackers tend to blur the context of nodes by adding edges between nodes with different labels, so that GNNs can no longer recognize these nodes. Based on this observation, we propose a GNN defender, GNAT, which incorporates three augmented graphs, i.e., a topology graph, a feature graph, and an ego graph, to make the context of nodes more distinguishable. Extensive experiments on three real-world datasets demonstrate the effectiveness and efficiency of our proposed attacker, even though it accesses neither node labels nor GNN parameters. The effectiveness and efficiency of our proposed defender are also validated by substantial experiments.
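The abstract only outlines the representation-shift idea behind the attacker. As an illustration of that criterion (not the authors' actual PEEGA objective), the sketch below scores candidate edge flips by how much they perturb parameter-free, label-free node representations; the propagation scheme, the scoring function, and the toy graph are all assumptions made for this example.

```python
# Minimal sketch (illustrative, not the PEEGA implementation): rank candidate
# edge flips for a black-box attack by the shift they induce in label-free,
# parameter-free node representations. Only the adjacency matrix and node
# features are used, matching the black-box setting described in the abstract.
import numpy as np

def propagate(adj, feats, hops=2):
    """Parameter-free representations: H = (D^-1/2 (A + I) D^-1/2)^k X."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    h = feats.copy()
    for _ in range(hops):
        h = a_norm @ h
    return h

def score_edge_flip(adj, feats, u, v):
    """Representation shift (Frobenius norm) caused by toggling edge (u, v)."""
    h_clean = propagate(adj, feats)
    adj_pert = adj.copy()
    adj_pert[u, v] = adj_pert[v, u] = 1 - adj_pert[u, v]  # flip the edge
    h_pert = propagate(adj_pert, feats)
    return np.linalg.norm(h_pert - h_clean)

# Toy usage: pick the single edge flip that perturbs representations the most.
rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T            # symmetric adjacency, no self-loops
X = rng.random((6, 4))
candidates = [(u, v) for u in range(6) for v in range(u + 1, 6)]
best = max(candidates, key=lambda e: score_edge_flip(A, X, *e))
print("most damaging edge flip:", best)
```

Because the scoring needs no labels or model parameters, the whole search stays single-level: each candidate perturbation is evaluated directly, rather than through an inner training loop.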
Keywords
Graph Neural Network, Adversarial Attack, Graph Defense