CAP: Communication-Aware Automated Parallelization for Deep Learning Inference on CMP Architectures

IEEE Transactions on Computers (2022)

Abstract
Real-time inference of deep learning models on embedded, energy-efficient devices is increasingly desirable with the rapid growth of artificial intelligence at the edge. In particular, to achieve high energy efficiency and scalability, efficient parallelization of single-pass deep neural network (DNN) inference on chip multiprocessor (CMP) architectures is urgently required by many time-sensitive applications. However, as the number of processing cores scales up and the performance of individual cores grows rapidly, on-chip inter-core data movement is prone to become the performance bottleneck. To remedy this problem and further improve the performance of network inference, in this work we introduce CAP, a communication-aware DNN parallelization technique that exploits the elasticity and noise tolerance of deep learning algorithms on CMPs. Moreover, in the hope that these studies can provide new design insights for real-time neural network inference on embedded chips, we have evaluated the proposed approach on both multi-core Neural Network Accelerator (NNA) chips and general-purpose chip multiprocessors. Our experimental results show that CAP achieves 1.12×-1.65× system speedups and 1.14×-2.70× energy-efficiency improvements across different neural networks while maintaining inference accuracy, compared to baseline approaches.
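The abstract does not detail CAP's partitioning algorithm, so the following is only a minimal, hypothetical sketch of the trade-off it targets: when one layer is split across CMP cores, inter-core traffic rather than raw compute can dictate the best partition. The cost model, the `ConvLayer` fields, and all function names below are illustrative assumptions, not the paper's method (which additionally exploits the noise tolerance of DNNs to trim communication, something this sketch does not model).

```python
# Hypothetical sketch: weighing inter-core communication when splitting one
# convolution layer across CMP cores. The cost model and all names are
# assumptions for illustration; this is not the CAP algorithm from the paper.

from dataclasses import dataclass


@dataclass
class ConvLayer:
    in_ch: int   # input channels
    out_ch: int  # output channels
    h: int       # output feature-map height
    w: int       # output feature-map width
    k: int       # square kernel size


def comm_channel_split(layer: ConvLayer, cores: int) -> int:
    """Bytes moved if each core computes a slice of output channels:
    every core needs the full input feature map (a broadcast)."""
    bytes_per_elem = 1  # e.g. int8 activations on an embedded NNA
    return cores * layer.in_ch * layer.h * layer.w * bytes_per_elem


def comm_spatial_split(layer: ConvLayer, cores: int) -> int:
    """Bytes moved if the feature map is cut into horizontal stripes:
    only halo rows (k - 1 per stripe boundary) cross cores."""
    bytes_per_elem = 1
    halo_rows = (cores - 1) * (layer.k - 1)
    return halo_rows * layer.in_ch * layer.w * bytes_per_elem


def pick_partition(layer: ConvLayer, cores: int) -> str:
    """Choose the split with less inter-core traffic; per-core compute is
    balanced either way under this simplified model."""
    ch = comm_channel_split(layer, cores)
    sp = comm_spatial_split(layer, cores)
    return "channel" if ch < sp else "spatial"


if __name__ == "__main__":
    layer = ConvLayer(in_ch=64, out_ch=128, h=56, w=56, k=3)
    for n in (2, 4, 8, 16):
        print(n, pick_partition(layer, n),
              comm_channel_split(layer, n), comm_spatial_split(layer, n))
```

Under this toy model, a channel-wise split pays a full broadcast of the input map to every core, while a spatial split pays only the halo rows at stripe boundaries, so the preferred partition flips with core count and layer shape; a communication-aware scheme makes this choice per layer rather than fixing one strategy for the whole network.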
Keywords
Neural networks, parallel processing, real-time and embedded systems, single-chip multiprocessors, reinforcement learning, structured sparsity