Atomlayer: a universal reRAM-based CNN accelerator with atomic layer computation

55th ACM/ESDA/IEEE Design Automation Conference (DAC), 2018

Abstract
Although ReRAM-based convolutional neural network (CNN) accelerators have been widely studied, state-of-the-art solutions suffer from either an inability to support training (e.g., ISAAC [1]) or inefficient inference (e.g., PipeLayer [2]) due to their pipeline designs. In this work, we propose AtomLayer---a universal ReRAM-based accelerator that supports both efficient CNN training and inference. AtomLayer uses atomic layer computation, which processes only one network layer at a time, eliminating pipeline-related issues such as long latency, pipeline bubbles, and large on-chip buffer overhead. For further optimization, we use a unique filter mapping and a data reuse system to minimize the cost of layer switching and DRAM access. Our experimental results show that AtomLayer achieves higher power efficiency than ISAAC in inference (1.1×) and PipeLayer in training (1.6×), while reducing the footprint by 15×.
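The core idea of atomic layer computation is that the accelerator handles a single network layer at a time, rather than keeping every layer resident in a pipeline. A minimal sketch of this dataflow is shown below; the function name, shapes, and the ReLU activation are illustrative assumptions, not AtomLayer's actual microarchitecture:

```python
import numpy as np

def atomic_layer_forward(layers, x):
    """Hypothetical sketch of atomic layer computation: only the current
    layer's filters occupy the crossbars, so there is no inter-layer
    pipeline and no per-layer pipeline buffers.
    """
    for W in layers:
        # The ReRAM crossbar performs an analog matrix-vector multiply,
        # modeled here as a plain dot product followed by ReLU.
        x = np.maximum(W @ x, 0.0)
        # Layer switching would occur here: the next layer's filters are
        # mapped onto the crossbars, with a data reuse system limiting
        # DRAM traffic for the intermediate activations.
    return x

# Toy usage: two all-ones layers reduce a 3-vector to a single value.
layers = [np.ones((2, 3)), np.ones((1, 2))]
result = atomic_layer_forward(layers, np.ones(3))
```

Because each layer finishes completely before the next begins, the same loop serves both inference and the forward pass of training, which is what makes the design "universal" relative to inference-only pipelines.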
Keywords
universal ReRAM-based CNN accelerator, atomic layer computation, ReRAM-based convolutional neural network accelerators, PipeLayer, pipeline design, efficient CNN training, one network layer at a time, pipeline-related issues, pipeline bubbles, layer switching, AtomLayer