Accelerating Graph Neural Network Training on ReRAM-Based PIM Architectures via Graph and Model Pruning

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2023)

Abstract
Graph neural networks (GNNs) are used for predictive analytics on graph-structured data and have become popular in diverse real-world applications. Resistive random-access memory (ReRAM)-based processing-in-memory (PIM) architectures can accelerate GNN training. However, GNN training on ReRAM-based architectures is both compute- and data-intensive in nature. In this work, we propose a framework called SlimGNN that synergistically combines graph and model pruning to accelerate GNN training on ReRAM-based architectures. The proposed framework reduces the amount of redundant information in both the GNN model and the input graph(s) to streamline the overall training process. This enables fast and energy-efficient GNN training on ReRAM-based architectures. Experimental results demonstrate that this framework accelerates GNN training by up to 4.5× while using 6.6× less energy compared to the unpruned counterparts.
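The model-pruning side of such a framework typically zeroes out low-importance weights so that fewer crossbar cells need to be programmed. The abstract does not specify SlimGNN's pruning criterion, so the sketch below shows generic magnitude-based weight pruning, a common baseline; the function name and the sparsity threshold are illustrative assumptions, not the paper's method.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a weight matrix.

    NOTE: a generic magnitude-pruning sketch for illustration only;
    SlimGNN's actual pruning criteria are defined in the paper.
    `weights` is a list of rows; `sparsity` in [0, 1) is the fraction
    of entries to drop.
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)  # number of entries to zero out
    threshold = flat[k] if k < len(flat) else float("inf")
    return [[w if abs(w) >= threshold else 0.0 for w in row]
            for row in weights]


# Example: prune half the entries of a small 2x2 weight matrix.
pruned = magnitude_prune([[0.1, -2.0], [0.5, 3.0]], sparsity=0.5)
# The two smallest-magnitude weights (0.1 and 0.5) are zeroed.
```

Analogous logic on the graph side would drop low-importance edges from the adjacency structure, shrinking the aggregation workload per training step.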
Keywords
graph neural network training, PIM architectures, model pruning, ReRAM-based