ParaML: A Polyvalent Multicore Accelerator for Machine Learning

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2020)

Abstract
In recent years, machine learning (ML) techniques have proven to be powerful tools in various emerging applications. Traditionally, ML techniques are executed on general-purpose CPUs and GPUs, but the energy efficiency of these platforms is limited by their excessive support for flexibility. Hardware accelerators are an efficient alternative to CPUs/GPUs, but they are still limited in that each typically accommodates only a single ML technique (or family of techniques). However, different problems may require different ML techniques, so such accelerators may achieve poor learning accuracy or even be ineffective. In this paper, we present a polyvalent accelerator architecture integrated with multiple processing cores, called ParaML, which accommodates ten representative ML techniques: k-means, k-nearest neighbors (k-NN), naive Bayes (NB), support vector machine (SVM), linear regression (LR), classification tree (CT), deep neural network (DNN), learning vector quantization (LVQ), Parzen window (PW), and principal component analysis (PCA). Benefiting from our thorough analysis of the computational primitives and locality properties of different ML techniques, the single-core ParaML can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm² while consuming only 596 mW, as estimated by ICC and PrimeTime PX on the post-synthesis netlist, respectively. Compared with the NVIDIA K20M GPU (28-nm process), the single-core ParaML (65-nm process) is 1.21× faster and reduces energy by 137.93×. We also compare the single-core ParaML with other accelerators. Compared with PRINS, the single-core ParaML achieves 72.09× and 2.57× energy benefit for k-NN and k-means, respectively, and speeds up each k-NN query by 44.76×. Compared with EIE, the single-core ParaML achieves 5.02× speedup and 4.97× energy benefit with 11.62× less area when evaluated on a dense DNN.
Compared with the TPU, the single-core ParaML achieves 2.45× better power efficiency (5647 GOP/W versus 2300 GOP/W) with 321.36× less area. Compared to the single-core version, the 8-core ParaML further improves the speedup by up to 3.98×, with an area of 13.44 mm² and a power of 2036 mW.
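The 2.45× power-efficiency claim follows directly from the two figures quoted in the abstract; a minimal sanity check of that arithmetic (using only the numbers stated above):

```python
# Recompute the ParaML-vs-TPU power-efficiency ratio from the
# figures quoted in the abstract (both values in GOP/W).
paraml_eff = 5647  # single-core ParaML, as reported
tpu_eff = 2300     # TPU baseline, as reported

ratio = paraml_eff / tpu_eff  # ~2.455
print(f"ParaML is {ratio:.2f}x more power-efficient than the TPU")
```

The computed ratio is approximately 2.455, consistent with the 2.45× figure reported in the abstract.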
Keywords
Accelerator, machine learning (ML) techniques, multicore accelerator