Evaluating performance of Parallel Matrix Multiplication Routine on Intel KNL and Xeon Scalable Processors

2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), 2020

Abstract
In high-performance computing, the xGEMM routine is the core Level 3 BLAS operation for matrix-matrix multiplication. The performance of Parallel xGEMM (PxGEMM) is governed by two major factors: the flop rate achieved by the local matrix-matrix multiplication on each node, and the communication cost of broadcasting sub-matrices to the other nodes. In this paper, an approach is proposed to improve and tune the PDGEMM routine for modern Intel processors: Knights Landing (KNL) and Xeon Scalable Processors (SKL). The approach consists of two methods addressing these factors. First, the computation part of PDGEMM is improved with a blocked matrix-matrix multiplication algorithm whose block size is chosen to fit the KNL and SKL architectures. Second, an MPI-based communication routine is proposed to replace the default BLACS broadcast settings and reduce communication time. On 16 Intel KNL nodes, the proposed PDGEMM achieves performance comparable to the PDGEMM routines of ScaLAPACK and Intel MKL for smaller matrices; on 16 Xeon Scalable nodes, it outperforms both for smaller matrices.
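The two methods described in the abstract can be illustrated with minimal sketches in C; neither is the paper's actual implementation. The first sketch shows a cache-blocked local multiplication with a tunable block size NB, where the value 64 and the function name blocked_dgemm are placeholders (the paper's tuned block sizes for KNL and SKL are not reproduced here). The second sketch shows how the BLACS broadcast step could in principle be replaced by a plain MPI_Bcast over a row communicator; bcast_panel, row_comm, panel, and panel_len are assumed names, not from the paper.

    #include <stddef.h>

    /* Hypothetical block size; the paper tunes this per architecture
     * (KNL vs. SKL caches, AVX-512 width). 64 is only a placeholder. */
    #define NB 64

    static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

    /* C += A * B for n x n row-major double matrices, computed block by
     * block so each NB x NB working set stays cache resident. */
    void blocked_dgemm(size_t n, const double *A, const double *B, double *C)
    {
        for (size_t ii = 0; ii < n; ii += NB)
            for (size_t kk = 0; kk < n; kk += NB)
                for (size_t jj = 0; jj < n; jj += NB)
                    for (size_t i = ii; i < min_sz(ii + NB, n); ++i)
                        for (size_t k = kk; k < min_sz(kk + NB, n); ++k) {
                            double aik = A[i * n + k];
                            for (size_t j = jj; j < min_sz(jj + NB, n); ++j)
                                C[i * n + j] += aik * B[k * n + j];
                        }
    }

    #include <mpi.h>

    /* Hypothetical replacement for the BLACS broadcast step: the owning
     * process broadcasts its local panel with MPI_Bcast over a row
     * communicator, so collective tuning is handled by the MPI library
     * rather than the BLACS defaults. */
    void bcast_panel(double *panel, int panel_len, int root, MPI_Comm row_comm)
    {
        MPI_Bcast(panel, panel_len, MPI_DOUBLE, root, row_comm);
    }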
Keywords
Parallel matrix-matrix multiplication,Parallel BLAS,ScaLAPACK,Intel Knights Landing,Intel Xeon Scalable,AVX-512