Where to Mask: Structure-Guided Masking for Graph Masked Autoencoders
arXiv (2024)
Abstract
Graph masked autoencoders (GMAE) have emerged as a significant advancement in
self-supervised pre-training for graph-structured data. Previous GMAE models
primarily utilize a straightforward random masking strategy for nodes or edges
during training. However, this strategy fails to consider the varying
significance of different nodes within the graph structure. In this paper, we
investigate the potential of leveraging the graph's structural composition as a
fundamental and unique prior in the masked pre-training process. To this end,
we introduce a novel structure-guided masking strategy (i.e., StructMAE),
designed to refine the existing GMAE models. StructMAE involves two steps: 1)
Structure-based Scoring: Each node is evaluated and assigned a score reflecting
its structural significance. Two distinct scoring schemes are proposed:
predefined and learnable scoring. 2) Structure-guided Masking: With
the obtained assessment scores, we develop an easy-to-hard masking strategy
that gradually increases the structural awareness of the self-supervised
reconstruction task. Specifically, the strategy begins with random masking and
progresses to masking structure-informative nodes based on the assessment
scores. This design gradually and effectively guides the model in learning
graph structural information. Furthermore, extensive experiments consistently
demonstrate that our StructMAE method outperforms existing state-of-the-art
GMAE models in both unsupervised and transfer learning tasks. Code is
available at https://github.com/LiuChuang0059/StructMAE.
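The two-step procedure in the abstract — scoring nodes by structural importance, then moving from random masking toward score-guided masking over training — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: node degree stands in for a predefined score, and the linear blending schedule, function name, and `progress` parameter are assumptions.

```python
import numpy as np

def structmae_mask(scores, mask_ratio, progress, rng=None):
    """Select node indices to mask with an easy-to-hard schedule.

    scores:     per-node structural-importance scores in [0, 1]
                (e.g. normalized degree as a predefined score)
    mask_ratio: fraction of nodes to mask
    progress:   training progress in [0, 1]; 0 = pure random masking,
                1 = fully score-guided masking
    """
    rng = rng or np.random.default_rng(0)
    scores = np.asarray(scores, dtype=float)
    n_mask = int(mask_ratio * len(scores))
    # Blend random noise with the structural scores; as `progress` grows,
    # structure-informative (high-scoring) nodes are masked more often.
    noise = rng.random(len(scores))
    blended = (1.0 - progress) * noise + progress * scores
    # Mask the nodes ranked highest under the blended criterion.
    return np.argsort(blended)[-n_mask:]

# Toy 6-node graph with degrees used as a predefined score.
degrees = np.array([1, 4, 2, 5, 1, 3], dtype=float)
scores = degrees / degrees.max()
early = structmae_mask(scores, 0.5, progress=0.0)  # essentially random
late = structmae_mask(scores, 0.5, progress=1.0)   # highest-degree nodes
```

At `progress=1.0` the blended criterion reduces to the scores themselves, so the mask concentrates on the most structurally informative nodes, which is the "hard" end of the curriculum described above.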