Algorithm/Hardware Co-Optimization for Sparsity-Aware SpMM Acceleration of GNNs

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2023)

Abstract
In recent years, graph neural networks (GNNs) have achieved impressive performance in various application fields by extracting information from graph-structured data. GNN computation contains extensive feature aggregation operations, which have become a performance bottleneck and can be abstracted as a specialized sparse-dense matrix multiplication (SpMM) operation. Previous works have leveraged the inner product or outer product to accelerate the feature aggregation process; however, their inefficient execution leads to extremely unbalanced workloads and extensive intermediate data, hampering the performance of previous processors. In this article, we therefore demonstrate an algorithm/hardware co-optimization opportunity to enhance SpMM acceleration for GNNs. First, the algorithm part develops a dataflow-efficient SpMM algorithm that integrates three optimization methods to mitigate computation and memory-access inefficiencies. Specifically, 1) the proposed equal-value partition method achieves fine-grained data partitioning and enables load balancing during data movement; 2) after observing the vertex-aggregation phenomenon, a vertex-clustering optimization method is presented to enable significant data locality; and 3) an adaptive dataflow based on Gustavson's algorithm is further implemented to enable efficient distribution of sparse elements and improve computing-resource utilization. Then, the hardware part features the proposed SpMM algorithm and customizes SDMA, a flexible and efficient accelerator that boosts SpMM acceleration, following the adaptive dataflow to eliminate sparsity and exploit the regular parallelism dimension. Finally, we prototype SDMA on the Xilinx Alveo U280 FPGA accelerator card. The results demonstrate that SDMA achieves $5.68\times$–$14.68\times$ higher energy efficiency than previous GPU implementations on the Nvidia GTX 1080Ti and $1.32\times$ higher throughput than the state-of-the-art FPGA prototype.
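For context on the adaptive dataflow: Gustavson's algorithm forms the output one row at a time by scaling and accumulating dense rows of the feature matrix, avoiding both the zero-operand checks of inner-product dataflows and the large intermediate partial products of outer-product dataflows. Below is a minimal sketch over a CSR adjacency matrix; the function name, CSR layout, and NumPy usage are illustrative assumptions, not the paper's SDMA implementation.

```python
import numpy as np

def gustavson_spmm(indptr, indices, data, X):
    """Row-wise (Gustavson) SpMM sketch: Y = A @ X, with A in CSR format.

    indptr/indices/data: CSR arrays of the sparse adjacency A (n x n).
    X: dense node-feature matrix (n x f).
    """
    n, f = len(indptr) - 1, X.shape[1]
    Y = np.zeros((n, f), dtype=X.dtype)
    for i in range(n):                      # produce one output row at a time
        for k in range(indptr[i], indptr[i + 1]):
            j, a_ij = indices[k], data[k]   # nonzero A[i, j]
            Y[i, :] += a_ij * X[j, :]       # scale-and-accumulate a dense row
    return Y

# Tiny hypothetical example: a 3-vertex graph with 4 edges, 2 features per vertex.
indptr  = np.array([0, 2, 3, 4])
indices = np.array([1, 2, 0, 1])
data    = np.array([1.0, 1.0, 1.0, 1.0])
X = np.arange(6, dtype=np.float64).reshape(3, 2)
print(gustavson_spmm(indptr, indices, data, X))  # each row aggregates its neighbors' features
```

Because each output row touches only the dense rows indexed by that row's nonzeros, this dataflow exposes a regular per-row parallelism dimension, which is the property the abstract attributes to the adaptive dataflow.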
Keywords
Algorithm optimization, graph neural networks (GNNs), hardware acceleration, sparse-dense matrix multiplication (SpMM)