Graph Topology Noise Aware Learning by Feature Clustering and Pseudo-labels Generator

2022 International Joint Conference on Neural Networks (IJCNN)(2022)

Abstract
Graph Convolutional Networks (GCNs) and their variants have achieved impressive performance in a wide range of graph-based tasks. For graph data, both feature information and structural information play a crucial role. Most GCNs update node representations by aggregating information from neighbors. However, the structural information may contain noise that misleads downstream tasks. Hence, we propose GTNACP, a new graph topology optimization method for semi-supervised node classification that improves the quality of structural information. The core idea of our method is to identify the structural information to be optimized by comparing the clustering results of the input data with the pseudo-labels obtained from pre-training. Because the pseudo-labels are themselves noisy, instead of fully trusting the generated labeled set, we design new loss functions that measure their confidence; in this way, GTNACP alleviates the impact of incorrect pseudo-labels. Moreover, we find experimentally that erroneously deleting or adding edges can irreversibly degrade performance. To alleviate this negative impact, GTNACP adopts an edge modification method based on node similarity and clustering performance. Our experiments verify that GTNACP can be easily combined with traditional GCNs, outperforms baseline models in various semi-supervised node classification tasks, and, to some extent, effectively mitigates over-smoothing.
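The abstract only sketches the edge-filtering idea at a high level. Below is a minimal illustrative sketch of one plausible reading, assuming pseudo-labels from a pre-trained model are already available; the function name prune_edges, the use of cosine similarity, the sim_threshold parameter, and the exact keep/drop rule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): cluster node features, compare the
# cluster assignments with pseudo-labels from a pre-trained model, and drop edges
# whose endpoints disagree under both views and are dissimilar in feature space.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def prune_edges(features, edges, pseudo_labels, n_clusters, sim_threshold=0.5):
    """features: (N, F) node features; edges: list of (u, v) index pairs;
    pseudo_labels: (N,) labels assumed to come from a pre-trained GCN."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    sims = cosine_similarity(features)  # pairwise node feature similarity
    kept = []
    for u, v in edges:
        # Keep an edge if the clustering or the pseudo-labels agree across it,
        # or if the endpoints are sufficiently similar; otherwise treat it as
        # likely structural noise and remove it.
        agree = (clusters[u] == clusters[v]) or (pseudo_labels[u] == pseudo_labels[v])
        if agree or sims[u, v] >= sim_threshold:
            kept.append((u, v))
    return kept

# Toy usage with random data, purely to show the call pattern.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 8))
E = [(i, (i + 1) % 10) for i in range(10)]
y_pseudo = rng.integers(0, 3, size=10)
print(prune_edges(X, E, y_pseudo, n_clusters=3))
```

In this reading, requiring agreement from either view before keeping a disputed edge mirrors the paper's stated caution about irreversibly degrading performance through direct, erroneous edge deletion or addition.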
Keywords
Graph neural networks,Graph topology optimization,Semi-supervised node classification