SparGNN: Efficient Joint Feature-Model Sparsity Exploitation in Graph Neural Network Acceleration

2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC), 2024

Abstract
With the rapid growth of both graph scale and model size, accelerating graph neural networks (GNNs) at scale faces significant pressure in computation and memory footprint. Exploiting data sparsity through pruning, which has shown remarkable effectiveness in deep neural networks (DNNs), still lags behind in GNN acceleration. This is because the costly overhead of pruning large graphs and inefficient hardware support eclipse the benefits of GNN sparsification. To this end, this paper proposes SparGNN, an algorithm-accelerator co-design that efficiently exploits data sparsity in both features and models to speed up GNN acceleration while preserving accuracy. On the algorithm side, to reduce the overhead of iterative pruning, we distill a sparsified subgraph to substitute for the original input graph during pruning, which uncovers the potential data sparsity in both features and models at low cost without compromising accuracy. On the hardware side, to improve the data locality of sparsified feature-weight multiplication, we design a compressed row-/column-wise product dataflow for efficient feature updating. We then propose lightweight hardware changes that make our design applicable to conventional GNN accelerators. Experimental results show that, compared to state-of-the-art GNN accelerators, SparGNN reduces computation by $1.5 \sim 4.3\times$ and achieves an average $1.8 \sim 6.8\times$ speedup with $1.4 \sim 9.2\times$ energy efficiency improvement.
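The row-wise product dataflow mentioned above can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal Python rendition of a row-wise (Gustavson-style) sparse-sparse multiply over CSR-compressed operands, where each nonzero feature element scales one row of the weight matrix, so only nonzero operands are ever touched. All names and the dict-based accumulator are illustrative assumptions.

```python
def csr_rowwise_matmul(x_indptr, x_indices, x_data,
                       w_indptr, w_indices, w_data):
    """Row-wise product of two CSR matrices (illustrative sketch).

    For each row i of the sparse feature matrix X, every nonzero X[i, k]
    scales row k of the sparse weight matrix W, and the partial products
    accumulate into output row i. Zero operands are skipped entirely,
    which is the locality benefit a compressed dataflow targets.
    """
    out = []
    for i in range(len(x_indptr) - 1):
        acc = {}  # column index -> accumulated value for output row i
        for p in range(x_indptr[i], x_indptr[i + 1]):
            k, xv = x_indices[p], x_data[p]
            for q in range(w_indptr[k], w_indptr[k + 1]):
                j = w_indices[q]
                acc[j] = acc.get(j, 0.0) + xv * w_data[q]
        out.append(acc)
    return out
```

For example, multiplying the diagonal matrices X = diag(1, 2) and W = diag(3, 4) in CSR form yields output rows {0: 3.0} and {1: 8.0}, with only two scalar multiplications performed.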
Keywords
Neural Network, Efficient Use, Graph Neural Networks, Sparse Use, Deep Neural Network, Model Size, Sparse Data, Graphical Model, Improve Energy Efficiency, Large Graphs, Input Graph, Hardware Changes, Artificial Neural Network, Weight Matrix, Row Vector, Model Weights, Matrix Multiplication, Sparse Matrix, Feature Matrix, Nonzero Elements, Graph Neural Network Model, Stages Of Aggregation, Index Of Coordination, Index Of Element, Intermediate Features, Element In Row, Column Index, Sparse Model, Balanced Tree, Graph Convolution