High-Performance and Programmable Attentional Graph Neural Networks with Global Tensor Formulations.

Maciej Besta, Pawel Renc, Robert Gerstenberger, Paolo Sylos Labini, Alexandros Nikolaos Ziogas, Tiancheng Chen, Lukas Gianinazzi, Florian Scheidl, Kalman Szenes, Armon Carigiet, Patrick Iff, Grzegorz Kwasniewski, Raghavendra Kanakagiri, Chio Ge, Sammy Jaeger, Jaroslaw Was, Flavio Vella, Torsten Hoefler

SC 2023

Abstract
Graph attention models (A-GNNs), a type of Graph Neural Networks (GNNs), have been shown to be more powerful than simpler convolutional GNNs (C-GNNs). However, A-GNNs are more complex to program and harder to scale. To address this, we develop a novel mathematical formulation, based on tensors that group all the feature vectors, targeting both training and inference of A-GNNs. The formulation enables straightforward adoption of communication-minimizing routines, fosters optimizations such as vectorization, and enables seamless integration with established linear algebra DSLs and libraries such as GraphBLAS. Our implementation uses a data redistribution scheme explicitly developed for the sparse-dense tensor operations used heavily in GNNs, together with fusion optimizations that further reduce memory usage and communication cost. We ensure theoretical asymptotic reductions in communicated data compared to the established message-passing GNN paradigm. Finally, we provide excellent scalability and speedups of up to 4--5x over modern libraries such as Deep Graph Library.
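The abstract does not spell out the tensor formulation itself. As an illustration only, the sketch below (plain NumPy/SciPy; names such as gat_layer, a_src, and a_dst are hypothetical and not from the paper) shows the general style of such formulations: a single graph-attention layer expressed over a stacked node-feature matrix with sparse-dense matrix products, rather than per-edge message passing.

```python
# Minimal sketch of a GAT-style layer over a stacked feature matrix H,
# using only sparse-dense matrix operations. This is NOT the paper's
# implementation; shapes and names are assumptions for illustration.
import numpy as np
import scipy.sparse as sp

def gat_layer(A: sp.csr_matrix, H: np.ndarray, W: np.ndarray,
              a_src: np.ndarray, a_dst: np.ndarray) -> np.ndarray:
    """One attention layer: H' = row_softmax(E) @ (H W), E sparse like A."""
    Z = H @ W                               # dense projection, shape (N, F_out)
    s_src = Z @ a_src                       # per-node source scores, shape (N,)
    s_dst = Z @ a_dst                       # per-node destination scores, shape (N,)

    # Attention logits only on existing edges: e_ij = LeakyReLU(s_src[i] + s_dst[j]).
    rows, cols = A.nonzero()
    logits = s_src[rows] + s_dst[cols]
    logits = np.where(logits > 0, logits, 0.2 * logits)
    e = np.exp(logits - logits.max())       # stabilized exponent

    # Sparse attention matrix with the same sparsity pattern as A,
    # normalized row-wise (softmax over each node's neighborhood).
    E = sp.csr_matrix((e, (rows, cols)), shape=A.shape)
    row_sums = np.asarray(E.sum(axis=1)).ravel() + 1e-12
    E = sp.diags(1.0 / row_sums) @ E

    return E @ Z                            # sparse-dense product aggregates neighbors

if __name__ == "__main__":
    # Tiny usage example with random data.
    rng = np.random.default_rng(0)
    N, F_in, F_out = 5, 4, 3
    A = sp.random(N, N, density=0.4, format="csr", random_state=0)
    A.data[:] = 1.0
    H = rng.standard_normal((N, F_in))
    W = rng.standard_normal((F_in, F_out))
    out = gat_layer(A, H, W, rng.standard_normal(F_out), rng.standard_normal(F_out))
    print(out.shape)                        # (5, 3)
```

Writing the layer this way keeps all per-node work as dense matrix products and all neighborhood aggregation as sparse-dense products, which is what makes vectorization and GraphBLAS-style backends applicable.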
Keywords
Graph Attention Models, Graph Neural Networks, Sparse-Dense Tensor Operations