ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks
arXiv (2024)
Abstract
Although Graph Neural Networks (GNNs) have exhibited the powerful ability to
gather graph-structured information from neighborhood nodes via various
message-passing mechanisms, the performance of GNNs is limited by poor
generalization and fragile robustness caused by noisy and redundant graph data.
As a prominent solution, Graph Augmentation Learning (GAL) has recently
received increasing attention. Among prior GAL approaches, edge-dropping
methods that randomly remove edges from a graph during training are effective
techniques to improve the robustness of GNNs. However, random edge dropping
can discard critical edges, weakening the effectiveness of message passing. In
this paper, we propose a novel adversarial
edge-dropping method (ADEdgeDrop) that leverages an adversarial edge predictor
guiding the removal of edges, which can be flexibly incorporated into diverse
GNN backbones. Employing an adversarial training framework, the edge predictor
utilizes the line graph transformed from the original graph to estimate the
edges to be dropped, which improves the interpretability of the edge-dropping
method. The proposed ADEdgeDrop is optimized alternately by stochastic gradient
descent and projected gradient descent. Comprehensive experiments on six graph
benchmark datasets demonstrate that the proposed ADEdgeDrop outperforms
state-of-the-art baselines across various GNN backbones, exhibiting improved
generalization and robustness.
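The abstract references two mechanisms that are easy to illustrate in isolation: the random edge-dropping baseline (DropEdge-style) that ADEdgeDrop improves upon, and the line-graph transformation, in which every edge of the original graph becomes a node and two such nodes are connected when the corresponding edges share an endpoint. The sketch below is a minimal NumPy illustration of these two ideas only; function names (`drop_edges`, `line_graph_edges`) are hypothetical, and ADEdgeDrop's actual learned edge predictor and adversarial training loop are not reproduced here.

```python
import numpy as np

def drop_edges(edge_index, drop_rate, rng):
    """DropEdge-style baseline: remove each edge independently with
    probability `drop_rate`. edge_index is a (2, E) array of
    [source; target] node indices; returns the retained sub-array."""
    num_edges = edge_index.shape[1]
    keep_mask = rng.random(num_edges) >= drop_rate
    return edge_index[:, keep_mask]

def line_graph_edges(edge_index):
    """Line-graph view: each original edge becomes a node, and two
    edge-nodes are linked when the original edges share an endpoint.
    Returns the list of edge-node pairs (O(E^2) toy implementation)."""
    num_edges = edge_index.shape[1]
    pairs = []
    for i in range(num_edges):
        for j in range(i + 1, num_edges):
            if set(edge_index[:, i]) & set(edge_index[:, j]):
                pairs.append((i, j))
    return pairs

# Toy graph: a triangle with edges 0-1, 1-2, 0-2.
edge_index = np.array([[0, 1, 0],
                       [1, 2, 2]])
rng = np.random.default_rng(0)

sub = drop_edges(edge_index, drop_rate=0.5, rng=rng)
lg = line_graph_edges(edge_index)
# In a triangle every pair of edges shares an endpoint,
# so the line graph is itself a triangle: 3 edge-node pairs.
```

In ADEdgeDrop, the uniform `drop_rate` above is replaced by per-edge drop decisions produced by an adversarial predictor operating on this line-graph representation, which is what makes the dropped edges interpretable rather than random.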