INS-GNN: Improving graph imbalance learning with self-supervision.

Inf. Sci. (2023)

Abstract
Graph Neural Networks (GNNs) have achieved tremendous success in applications such as node classification, link prediction, and graph classification. However, graph-structured data are often imbalanced in real-world scenarios. When trained on an imbalanced dataset, GNNs perform far from satisfactorily on nodes of minority classes. Because of their small population, minority nodes contribute less to the training objective, and the message-passing mechanism behind GNNs exacerbates this problem further, since information from minority nodes can be overwhelmed by that from majority nodes during propagation. The most effective way to tackle imbalanced node classification with GNNs is therefore to promote the engagement of minority nodes during propagation. Inspired by self-supervised learning, which extracts useful information from unlabeled samples, we propose INS-GNN, a novel model-agnostic framework for the imbalanced node classification problem. Specifically, self-supervised pre-training is first used as a pretext task to pre-train the model without label information, which alleviates the label bias that imbalanced labels naturally impose during learning. Then, self-training assigns pseudo-labels to unlabeled nodes and conducts self-supervised learning to assist model training, alleviating the problem that obtaining numerous annotated nodes is time-consuming and resource-costly. Finally, self-supervised edge augmentation modifies the structural information of minority nodes to enhance their contribution. Experimental results on various real-world datasets demonstrate that INS-GNN achieves state-of-the-art performance on the imbalanced node classification problem.
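The self-training step described in the abstract can be illustrated with a confidence-thresholded pseudo-labeling routine. This is a minimal sketch under common self-training assumptions, not the authors' actual implementation; the function name, the threshold value, and the use of plain class-probability confidence are all assumptions:

```python
import numpy as np

def assign_pseudo_labels(probs, threshold=0.9):
    """Confidence-based pseudo-labeling for self-training.

    probs: (num_unlabeled_nodes, num_classes) predicted class
    probabilities from the current model. Nodes whose maximum
    class probability reaches `threshold` are accepted and
    receive the argmax class as a pseudo-label; the rest stay
    unlabeled for this round.
    Returns (accepted_indices, pseudo_labels).
    """
    confidence = probs.max(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

# Toy predictions for 4 unlabeled nodes over 3 classes.
probs = np.array([
    [0.95, 0.03, 0.02],  # confident -> pseudo-label class 0
    [0.40, 0.35, 0.25],  # uncertain -> dropped this round
    [0.05, 0.92, 0.03],  # confident -> pseudo-label class 1
    [0.10, 0.10, 0.80],  # below threshold -> dropped
])
idx, labels = assign_pseudo_labels(probs, threshold=0.9)
print(idx.tolist(), labels.tolist())  # [0, 2] [0, 1]
```

In a full pipeline the accepted pseudo-labeled nodes would be added to the training set for the next round, which is how self-training reduces the need for costly manual annotation.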
Keywords
Imbalanced node classification, Graph neural networks, Pre-training, Self-training, Self-supervised learning