Graph Topology Noise Aware Learning by Feature Clustering and Pseudo-labels Generator

2022 International Joint Conference on Neural Networks (IJCNN)

Abstract
Graph Convolutional Networks (GCNs) and their variants have achieved impressive performance on a wide range of graph-based tasks. For graph data, both feature information and structural information play a crucial role. Most GCNs update node representations by aggregating information from neighbors. However, the structural information may contain noise that misleads downstream tasks. Hence, we propose GTNACP, a new graph topology optimization method for semi-supervised node classification, to improve the quality of structural information. The core idea of our method is to select the structural information to be optimized by comparing the clustering results on the input data with the pseudo-labels obtained from pre-training. Because the introduced pseudo-labels are themselves noisy, instead of fully trusting the generated labeled set, we design new loss functions that measure their confidence; in this way, GTNACP alleviates the impact of incorrect pseudo-labels. Moreover, we experimentally find that erroneously deleting or adding edges can irreversibly degrade performance. To alleviate this negative impact, GTNACP adopts an edge modification method based on node similarity and clustering performance. Our experiments verify that GTNACP can easily be combined with traditional GCNs, outperforms baseline models on various semi-supervised node classification tasks, and, to some extent, effectively mitigates over-smoothing.
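The abstract gives no implementation details, so the sketch below is only an illustration of the described filtering step, not the authors' code: it clusters the raw node features, compares each edge's endpoint cluster assignments with pseudo-labels from a pre-trained classifier, and keeps low-similarity disagreeing edges as candidates for modification. The function name flag_candidate_noisy_edges, the use of KMeans, and the sim_threshold parameter are all assumptions introduced here.

```python
# Illustrative sketch (not the paper's implementation): flag candidate noisy
# edges by comparing feature-clustering assignments with pre-training
# pseudo-labels, gated by node feature similarity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity


def flag_candidate_noisy_edges(features, edges, pseudo_labels,
                               n_clusters, sim_threshold=0.2):
    """Return edges whose endpoints disagree under both the clustering of the
    input features and the pseudo-labels from a pre-trained model, and whose
    feature similarity is low. Names and thresholds here are hypothetical.

    features      : (N, F) array of node features
    edges         : iterable of (u, v) node-index pairs
    pseudo_labels : (N,) array of pseudo-labels from a pre-trained classifier
    """
    # Cluster the input features (the abstract compares clustering results
    # of the input data against pre-training pseudo-labels).
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=0).fit_predict(features)
    sim = cosine_similarity(features)

    candidates = []
    for u, v in edges:
        cluster_disagree = clusters[u] != clusters[v]
        label_disagree = pseudo_labels[u] != pseudo_labels[v]
        # Treat an edge as likely noise only when both signals say the
        # endpoints differ and their feature similarity is low.
        if cluster_disagree and label_disagree and sim[u, v] < sim_threshold:
            candidates.append((u, v))
    return candidates


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 16))                   # toy node features
    E = [(0, 1), (1, 2), (2, 5), (3, 4), (6, 7)]   # toy edge list
    y_pseudo = np.array([0, 0, 0, 1, 1, 1, 2, 2])  # toy pseudo-labels
    print(flag_candidate_noisy_edges(X, E, y_pseudo, n_clusters=3))
```

Requiring agreement between the clustering signal and the pseudo-label signal, rather than acting on either alone, mirrors the abstract's caution that directly deleting or adding edges by error can irreversibly degrade performance.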
Key words
Graph neural networks, Graph topology optimization, Semi-supervised node classification