GIN : High-Performance, Scalable Inference for Graph Neural Networks

semanticscholar(2020)

Abstract
Deep learning models have enjoyed tremendous success when applied to low-dimensional regular grid data such as images, video, and speech. Recently, graph neural networks (GNNs) have been proposed to learn from high-dimensional graph-structured data (e.g., social networks, molecular structures, and protein networks). Unfortunately, existing systems developed for the construction, training, and deployment of GNN models suffer from poor performance, especially when running on big graphs that exceed the size of the on-board DRAM of computation accelerators such as GPUs. In this paper, we present Gin, a new computational framework that generates highly efficient compute kernels for GNN inference. Specifically, Gin lets a user continue to use a familiar deep learning framework (e.g., TensorFlow) as the front end, while a translator lowers the high-level representation of a GNN model into low-level code. The back end in Gin compiles the translated code and creates optimized kernels for the CPU. Our evaluation shows that Gin outperforms state-of-the-art systems by up to three orders of magnitude, significantly accelerating inference on billion-edge graphs.
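To make the workload concrete, the core operation such a GNN inference kernel must execute efficiently is sparse neighbor aggregation over the graph. The sketch below is illustrative only, not Gin's actual API: the function name, CSR layout, and mean aggregator are assumptions, showing the kind of per-node gather that dominates inference on large graphs.

```python
import numpy as np

def aggregate_neighbors(indptr, indices, features):
    """Mean-aggregate each node's neighbor features (one GNN layer's gather).

    Hypothetical sketch of the sparse operation a GNN inference kernel
    optimizes; the graph is stored in CSR form (indptr/indices).
    """
    num_nodes = len(indptr) - 1
    out = np.zeros_like(features)
    for v in range(num_nodes):
        # Neighbors of node v occupy indices[indptr[v]:indptr[v + 1]].
        nbrs = indices[indptr[v]:indptr[v + 1]]
        if len(nbrs):
            out[v] = features[nbrs].mean(axis=0)
    return out

# Tiny example graph: node 0 -> {1, 2}, node 1 -> {2}, node 2 -> {}.
indptr = np.array([0, 2, 3, 3])
indices = np.array([1, 2, 2])
features = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
agg = aggregate_neighbors(indptr, indices, features)
```

On billion-edge graphs this irregular, memory-bound access pattern is exactly what a compiled, cache-aware CPU kernel can accelerate relative to generic dense-framework operators.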