CATERPILLAR: Coarse Grain Reconfigurable Architecture for accelerating the training of Deep Neural Networks

Yuanfang Li, Ardavan Pedram

2017 IEEE 28th International Conference on Application-specific Systems, Architectures and Processors (ASAP), 2017

Abstract
Accelerating the inference of a trained DNN is a well-studied subject. In this paper, we switch the focus to the training of DNNs. The training phase is compute-intensive, demands complicated data communication, and contains multiple levels of data dependencies and parallelism. This paper presents an algorithm/architecture space exploration of efficient accelerators that achieve better network convergence rates and higher energy efficiency for training DNNs. We further demonstrate that an architecture with hierarchical support for collective communication semantics provides the flexibility to train various networks using both stochastic and batched gradient descent techniques. Our results suggest that smaller networks favor non-batched techniques, while larger networks perform better with batched operations. At 45nm technology, CATERPILLAR achieves performance efficiencies of 177 GFLOPS/W at over 80% utilization for SGD training on small networks and 211 GFLOPS/W at over 90% utilization for pipelined SGD/CP training on larger networks, using a total area of 103.2 mm² and 178.9 mm² respectively.
Keywords
CATERPILLAR,coarse grain reconfigurable architecture,deep neural networks training,DNN training,inference acceleration,algorithm/architecture space exploration,collective communication semantics,stochastic gradient descent,batched gradient descent
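The abstract contrasts stochastic (per-sample) and batched gradient descent, finding that smaller networks favor the former and larger networks the latter. A minimal sketch of the two update schemes on a toy linear model (hypothetical illustration, not code from the paper):

```python
import numpy as np

# Toy data for a linear model y = w * x, with true w = 3.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.01, size=64)

def sgd(epochs=50, lr=0.05):
    """Stochastic gradient descent: one weight update per sample."""
    w = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X[:, 0], y):
            w -= lr * (w * xi - yi) * xi  # gradient of 0.5*(w*x - y)^2
    return w

def minibatch_gd(epochs=50, lr=0.05, batch=16):
    """Batched gradient descent: one update per minibatch (averaged gradient)."""
    w = 0.0
    for _ in range(epochs):
        for i in range(0, len(X), batch):
            xb, yb = X[i:i+batch, 0], y[i:i+batch]
            w -= lr * np.mean((w * xb - yb) * xb)
    return w
```

SGD makes many small, noisy updates (often faster convergence per epoch on small problems), while the batched variant amortizes each update over a minibatch, exposing more parallelism per step; the latter maps more naturally onto wide hardware, consistent with the paper's finding that larger networks benefit from batched operations.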