
PCGraph: Accelerating GNN Inference on Large Graphs via Partition Caching

2021 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), 2021

Abstract
Graph neural networks (GNNs) have emerged as powerful learning tools for unstructured data and have been successfully applied to many graph-based application domains. Sampling-based GNN inference is commonly adopted in existing graph learning frameworks to handle large-scale graphs. However, this approach is restricted by redundant vertex embedding computation on the GPU and inefficient lo...
Key words
graph neural networks, inference, embedding computation, feature caching, pipeline parallel
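
The abstract and the keywords (feature caching, pipeline parallel) point at the general idea of keeping a partition's vertex features resident on the GPU so that sampled mini-batches avoid repeated host-to-device transfers. The sketch below illustrates that idea only; it is not PCGraph's implementation, and the class name `FeatureCache`, the `gather` method, and the choice of PyTorch are all assumptions made for illustration.

```python
import torch


class FeatureCache:
    """Keep the features of one 'hot' vertex set (e.g. a frequently accessed
    graph partition) resident on the GPU; fetch everything else from host RAM."""

    def __init__(self, cpu_feats: torch.Tensor, cached_ids: torch.Tensor,
                 device: str = "cuda"):
        self.cpu_feats = cpu_feats                         # full feature table in host memory
        self.device = device
        self.gpu_feats = cpu_feats[cached_ids].to(device)  # cached partition on the GPU
        # Map global vertex id -> slot in the GPU cache; -1 means "not cached".
        self.slot = torch.full((cpu_feats.shape[0],), -1, dtype=torch.long)
        self.slot[cached_ids] = torch.arange(cached_ids.numel())

    def gather(self, ids: torch.Tensor) -> torch.Tensor:
        """Assemble the feature matrix for a sampled mini-batch (ids is a CPU
        LongTensor of global vertex ids), hitting the GPU cache where possible."""
        slots = self.slot[ids]
        hit = slots >= 0
        out = torch.empty(ids.numel(), self.cpu_feats.shape[1], device=self.device)
        hit_dev = hit.to(self.device)
        out[hit_dev] = self.gpu_feats[slots[hit].to(self.device)]
        out[~hit_dev] = self.cpu_feats[ids[~hit]].to(self.device, non_blocking=True)
        return out


# Usage with random data: cache one partition's vertices, then gather features
# for a sampled mini-batch that mixes cached and uncached vertices.
if __name__ == "__main__":
    feats = torch.randn(10_000, 128)                      # 10k vertices, 128-dim features
    cache = FeatureCache(feats, cached_ids=torch.arange(2_000))
    batch = torch.randint(0, 10_000, (512,))              # sampled vertex ids
    print(cache.gather(batch).shape)                      # torch.Size([512, 128])
```

In a pipelined setup, as the "pipeline parallel" keyword suggests, the host-to-device copy for cache misses would typically be overlapped with GPU computation on a separate CUDA stream; that overlap is omitted here for brevity.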