Energy-Efficient Machine Learning Accelerator for Binary Neural Networks

GLSVLSI '20: Great Lakes Symposium on VLSI 2020, Virtual Event, China, September 2020 (2020)

Citations 10 | Views 24
Abstract
Binary neural networks (BNNs) have shown great potential for power-efficient, high-throughput implementation. Compared with conventional convolutional neural networks (CNNs), a BNN is trained with binary-constrained weights and activations, making it better suited to edge devices with limited computing and storage resources. In this paper, we introduce BNN characteristics, basic operations, and binarized-network optimization methods. We then summarize several accelerator designs for BNN hardware implementation based on three mainstream structures, i.e., ReRAM-based crossbars, FPGAs, and ASICs. By exploiting BNN characteristics and custom hardware design, all of these accelerators achieve massively parallel computation and highly pipelined data flow to improve latency and throughput. In addition, intermediate data in binary format are stored and processed on chip by constructing a computing-in-memory (CIM) architecture, which reduces off-chip communication costs in both power and latency.
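The basic operation referred to above is the replacement of multiply-accumulate with XNOR and popcount: with weights and activations constrained to ±1 and packed as bits, a dot product of length N equals 2·popcount(XNOR(a, w)) − N. The following is a minimal illustrative sketch (not taken from the paper; the function names and the {0, 1} encoding of ±1 values are assumptions) showing this equivalence in NumPy:

```python
import numpy as np

def binarize(x):
    """Encode real values as bits: 1 stands for +1, 0 stands for -1."""
    return (x >= 0).astype(np.uint8)

def xnor_popcount_dot(a_bits, w_bits):
    """Binary dot product over {-1, +1} values encoded as {0, 1} bits:
    sum of products = 2 * popcount(XNOR(a, w)) - N."""
    n = a_bits.size
    agree = np.logical_not(np.logical_xor(a_bits, w_bits))  # XNOR
    return 2 * int(np.count_nonzero(agree)) - n              # popcount

# Usage: compare against the explicit +/-1 dot product.
rng = np.random.default_rng(0)
a = rng.standard_normal(64)
w = rng.standard_normal(64)
ref = int(np.where(a >= 0, 1, -1) @ np.where(w >= 0, 1, -1))
assert xnor_popcount_dot(binarize(a), binarize(w)) == ref
```

In hardware, the XNOR and popcount stages map to simple gate arrays and adder trees (or to in-memory bitwise operations in ReRAM crossbars), which is what enables the parallelism and pipelining discussed in the abstract.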
Keywords
machine learning, energy-efficient