Learning node representations against perturbations

Pattern Recognition (2024)

Abstract
Recent graph neural networks (GNNs) have achieved remarkable performance in node representation learning. One key factor in GNNs' success is the smoothness property of node representations. Despite this, most GNN models are fragile to perturbations on graph inputs and can learn unreliable node representations. In this paper, we study how to learn node representations against perturbations in GNNs. Specifically, we require that a node representation remain stable under slight perturbations of the input, and that node representations from different structures be identifiable; these two properties are termed the stability and identifiability of node representations, respectively. To this end, we propose a novel model called Stability-Identifiability GNN Against Perturbations (SIGNNAP) that learns reliable node representations in an unsupervised manner. SIGNNAP formalizes stability and identifiability through a contrastive objective and preserves smoothness with existing GNN backbones. The proposed method is a generic framework that can be equipped with many backbone models (e.g., GCN, GraphSage and GAT). Extensive experiments on six benchmarks under both transductive and inductive node classification setups demonstrate the effectiveness of our method. Code and data are available online: https://github.com/xuChenSJTU/SIGNNAP-master-online
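The abstract does not give the exact form of SIGNNAP's objective, but the two properties it names map naturally onto an InfoNCE-style contrastive loss: embeddings of the same node under two slightly perturbed views form positive pairs (stability), while embeddings of different nodes act as negatives (identifiability). The sketch below is an illustrative assumption, not the paper's implementation; the function name, temperature, and Gaussian perturbation are all placeholders.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss over two views of node embeddings.

    z1, z2: (N, d) arrays; row i of each view is the same node, so the
    diagonal of the cross-view similarity matrix holds the positive pairs
    (stability) and the off-diagonal entries act as negatives (identifiability).
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau                      # (N, N) scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # for numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # cross-entropy on the diagonal

# Toy check: slightly perturbed views of the same embeddings (stable positives)
# should score a lower loss than views whose rows are misaligned.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z_perturbed = z + 0.01 * rng.normal(size=(8, 16))  # stand-in for a "slight perturbation"
loss_aligned = info_nce_loss(z, z_perturbed)
loss_shuffled = info_nce_loss(z, np.roll(z_perturbed, 1, axis=0))
```

In a full pipeline, `z1` and `z2` would come from a shared GNN backbone (e.g., GCN, GraphSage or GAT, as the abstract lists) applied to two perturbed copies of the input graph.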
Keywords
Graph neural networks, Node representation learning, Smoothness, Stability, Identifiability