GRANOLA: Adaptive Normalization for Graph Neural Networks
arXiv (2024)
Abstract
In recent years, significant efforts have been made to refine the design of
Graph Neural Network (GNN) layers, aiming to overcome diverse challenges, such
as limited expressive power and oversmoothing. Despite their widespread
adoption, the incorporation of off-the-shelf normalization layers like
BatchNorm or InstanceNorm within a GNN architecture may not effectively capture
the unique characteristics of graph-structured data, potentially reducing the
expressive power of the overall architecture. Moreover, existing graph-specific
normalization layers often struggle to offer substantial and consistent
benefits. In this paper, we propose GRANOLA, a novel graph-adaptive
normalization layer. Unlike existing normalization layers, GRANOLA normalizes
node features by adapting to the specific characteristics of the graph,
particularly by generating expressive representations of its neighborhood
structure, obtained by leveraging the propagation of Random Node Features (RNF)
in the graph. We present theoretical results that support our design choices.
Our extensive empirical evaluation on various graph benchmarks underscores the
superior performance of GRANOLA over existing normalization techniques.
Furthermore, GRANOLA emerges as the top-performing method among all baselines
within the same time complexity as Message Passing Neural Networks (MPNNs).
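The abstract's core mechanism can be sketched roughly as follows: propagate Random Node Features (RNF) through the graph to obtain structure-aware node embeddings, then use those embeddings to produce per-node affine parameters that modulate a standard normalization. This is a minimal illustrative sketch, not the paper's implementation: the fixed mean-aggregation steps and random projection heads (`w_gamma`, `w_beta`) stand in for the learned auxiliary GNN and parameter heads the paper presumably uses.

```python
import numpy as np

def granola_style_norm(h, adj, num_layers=2, rnf_dim=8, seed=0):
    """Hypothetical sketch of graph-adaptive normalization:
    propagate random node features (RNF) over the graph to get
    neighborhood-aware embeddings, then derive per-node scale/shift
    parameters that adapt a standard normalization of node features h.
    """
    rng = np.random.default_rng(seed)
    n, d = h.shape
    # RNF break symmetry between structurally similar nodes.
    z = rng.normal(size=(n, rnf_dim))
    # Fixed mean-aggregation message passing over the RNF (assumption:
    # the paper uses a learned auxiliary GNN; this is a stand-in).
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(num_layers):
        z = np.tanh((adj @ z) / deg)
    # Per-node affine parameters derived from the RNF embeddings
    # (random projections stand in for learned heads gamma(z), beta(z)).
    w_gamma = rng.normal(size=(rnf_dim, d)) / np.sqrt(rnf_dim)
    w_beta = rng.normal(size=(rnf_dim, d)) / np.sqrt(rnf_dim)
    gamma, beta = 1.0 + z @ w_gamma, z @ w_beta
    # Normalize over the graph's nodes, then apply the adaptive affine map.
    mu, sigma = h.mean(axis=0), h.std(axis=0) + 1e-5
    return gamma * (h - mu) / sigma + beta
```

Because `gamma` and `beta` depend on the propagated RNF, two graphs with identical feature statistics but different structure receive different normalizations, which is the graph-adaptivity the abstract claims.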