Simple Graph Condensation
arXiv (2024)
Abstract
The burdensome training costs on large-scale graphs have aroused significant
interest in graph condensation, which involves tuning Graph Neural Networks
(GNNs) on a small condensed graph for use on the large-scale original graph.
Existing methods primarily focus on aligning key metrics between the condensed
and original graphs, such as the gradients, distributions, and trajectories of
GNNs, yielding satisfactory performance on downstream tasks. However, these
complex metrics necessitate intricate computations and can potentially disrupt
the optimization process of the condensed graph, making condensation
highly demanding and unstable. Motivated by the recent success of simplified
models in various fields, we propose a simplified approach to metric alignment
in graph condensation, aiming to reduce unnecessary complexity inherited from
GNNs. In our approach, we eliminate external parameters and exclusively retain
the target condensed graph during the condensation process. Following the
hierarchical aggregation principles of GNNs, we introduce the Simple Graph
Condensation (SimGC) framework, which aligns the condensed graph with the
original graph from the input layer to the prediction layer, guided by a
pre-trained Simple Graph Convolution (SGC) model on the original graph. As a
result, both graphs possess a similar capability to train GNNs. This
straightforward yet effective strategy achieves a significant speedup of up to
10 times compared to existing graph condensation methods while performing on
par with state-of-the-art baselines. Comprehensive experiments conducted on
seven benchmark datasets demonstrate the effectiveness of SimGC in prediction
accuracy, condensation time, and generalization capability. Our code will be
made publicly available.
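The abstract describes aligning the condensed graph with the original graph from the input layer to the prediction layer under a pre-trained SGC model, but gives no implementation details. Below is a minimal PyTorch sketch of that general idea, not the authors' method: the helper names (sgc_embeddings, alignment_loss), the dense adjacency, and the per-layer mean-statistic alignment terms are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sgc_embeddings(adj_norm, x, k):
    """Hop-wise SGC representations [X, SX, S^2 X, ..., S^k X].

    adj_norm: normalized adjacency matrix (dense here for simplicity),
    x: node feature matrix, k: number of propagation steps.
    SGC removes nonlinearities between hops, so each layer is a single
    matrix product with the normalized adjacency.
    """
    embs = [x]
    for _ in range(k):
        x = adj_norm @ x
        embs.append(x)
    return embs

def alignment_loss(orig_embs, cond_embs, orig_logits, cond_logits):
    """Align layer-wise feature statistics and soft predictions.

    Matching per-layer mean embeddings plus mean logits is an assumed
    stand-in for the paper's input-to-prediction alignment objective.
    """
    loss = torch.zeros(())
    for eo, ec in zip(orig_embs, cond_embs):
        loss = loss + F.mse_loss(ec.mean(dim=0), eo.mean(dim=0))
    return loss + F.mse_loss(cond_logits.mean(dim=0), orig_logits.mean(dim=0))
```

In this sketch, the condensed features and adjacency would be the trainable parameters, optimized by gradient descent on alignment_loss while the original graph's embeddings and the pre-trained SGC classifier stay frozen, consistent with the abstract's claim of eliminating external parameters during condensation.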