Semi-External Memory Sparse Matrix Multiplication for Billion-Node Graphs

IEEE Trans. Parallel Distrib. Syst. (2017)

Abstract
Sparse matrix multiplication is traditionally performed in memory and scales to large matrices using the distributed memory of multiple nodes. In contrast, we scale sparse matrix multiplication beyond memory capacity by implementing sparse matrix dense matrix multiplication (SpMM) in a semi-external memory (SEM) fashion; i.e., we keep the sparse matrix on commodity SSDs and dense matrices in memory. Our SEM-SpMM incorporates many in-memory optimizations for large power-law graphs. It outperforms the in-memory implementations of Trilinos and Intel MKL and scales to billion-node graphs, far beyond the limitations of memory. Furthermore, on a single large parallel machine, our SEM-SpMM operates as fast as the distributed implementations of Trilinos using five times as much processing power. We also run our implementation in memory (IM-SpMM) to quantify the overhead of keeping data on SSDs. SEM-SpMM achieves almost 100 percent performance of IM-SpMM on graphs when the dense matrix has more than four columns; it achieves at least 65 percent performance of IM-SpMM on all inputs. We apply our SpMM to three important data analysis tasks—PageRank, eigensolving, and non-negative matrix factorization—and show that our SEM implementations significantly advance the state of the art.
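To make the operation concrete, the sketch below shows a plain in-memory CSR sparse-matrix times dense-matrix (SpMM) kernel. It is only an illustration of the computation the paper accelerates, not the authors' SEM-SpMM: their system additionally streams the sparse matrix from SSDs and applies further optimizations for power-law graphs. All type and function names here are hypothetical.

```cpp
// Minimal illustrative CSR SpMM kernel (not the paper's SEM implementation).
#include <vector>
#include <cstddef>
#include <iostream>

struct CSRMatrix {
    size_t nrows, ncols;
    std::vector<size_t> row_ptr;   // length nrows + 1
    std::vector<size_t> col_idx;   // column index of each non-zero
    std::vector<double> values;    // value of each non-zero
};

// Compute C = A * B, where A is sparse (CSR) and B, C are dense row-major
// matrices with k columns. Rows of A are processed independently, which is
// what lets a semi-external-memory implementation stream A block by block
// from SSDs while keeping only the dense matrices in memory.
void spmm(const CSRMatrix& A, const std::vector<double>& B, size_t k,
          std::vector<double>& C) {
    C.assign(A.nrows * k, 0.0);
    for (size_t i = 0; i < A.nrows; ++i) {
        for (size_t nz = A.row_ptr[i]; nz < A.row_ptr[i + 1]; ++nz) {
            const size_t j = A.col_idx[nz];
            const double a = A.values[nz];
            for (size_t c = 0; c < k; ++c)
                C[i * k + c] += a * B[j * k + c];
        }
    }
}

int main() {
    // 2x2 sparse matrix [[2, 0], [1, 3]] times a dense 2x2 matrix.
    CSRMatrix A{2, 2, {0, 1, 3}, {0, 0, 1}, {2.0, 1.0, 3.0}};
    std::vector<double> B{1.0, 2.0, 3.0, 4.0};
    std::vector<double> C;
    spmm(A, B, 2, C);
    for (double v : C) std::cout << v << ' ';   // prints: 2 4 10 14
    std::cout << '\n';
}
```

The row-wise structure of this loop is also why a one-column dense matrix recovers sparse matrix-vector multiplication, the building block of PageRank and eigensolvers mentioned in the abstract.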
Keywords
Sparse matrices, Memory management, Optimization, Matrix decomposition, Algorithm design and analysis, Registers, Engines