
A Systolic SNN Inference Accelerator and its Co-optimized Software Framework

Proceedings of the 2019 Great Lakes Symposium on VLSI (2019)

Cited by 19 | Views 35
Abstract
Although Deep Neural Network (DNN) architectures have achieved breakthroughs in computer vision tasks, they bear little resemblance to biological neurons. Spiking Neural Networks (SNNs) are expected to bridge the gap between artificial computing systems and biological systems, and they also show great potential for low-power computing. This paper presents a low-power hardware accelerator for SNN inference based on a systolic array, together with a co-optimized software framework. First, we present the hardware design, which adopts a systolic array informed by our exploration of SNN workloads. Then we define a data mapping onto the systolic array that guarantees computational correctness. Next, we apply compression methods to reduce both runtime and memory footprint. Finally, we make the systolic array size-configurable so that it adapts to different inputs, reducing computational overhead. We implement the accelerator on a Xilinx Virtex-7 690T FPGA. Experimental results show that SNN inference on our scheme loses little accuracy (less than 0.1%) on MNIST and Fashion-MNIST, and that the runtime of the most time-consuming layers decreases. The total power of our scheme is 0.745 W at 100 MHz.
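For readers unfamiliar with spike-based inference, the following is a minimal sketch of the kind of computation such an accelerator targets: one SNN layer with integrate-and-fire neurons, where the weight matrix is held stationary (as a systolic array would hold it) and binary input spikes drive accumulations over time. The function name, threshold value, and mapping are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only (not the authors' design): spike-driven accumulation
# for one SNN layer with integrate-and-fire neurons.
import numpy as np

THRESHOLD = 1.0  # firing threshold (assumed value for illustration)

def simulate_layer(weights, input_spikes, timesteps):
    """Accumulate weighted input spikes over time and emit output spikes.

    weights:      (n_out, n_in) synaptic weights, held stationary as a
                  weight-stationary systolic array would hold them
    input_spikes: (timesteps, n_in) binary spike trains
    returns:      (timesteps, n_out) binary output spike trains
    """
    n_out = weights.shape[0]
    membrane = np.zeros(n_out)                      # membrane potentials
    out = np.zeros((timesteps, n_out), dtype=np.uint8)
    for t in range(timesteps):
        # Spikes are binary, so each MAC degenerates to a masked accumulation,
        # which is the main source of the low-power behaviour SNNs promise.
        membrane += weights @ input_spikes[t]
        fired = membrane >= THRESHOLD
        out[t] = fired
        membrane[fired] = 0.0                       # reset neurons that fired
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.5, size=(4, 8))
    spikes = (rng.random((10, 8)) < 0.3).astype(np.uint8)
    print(simulate_layer(w, spikes, timesteps=10))
```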
Keywords
compression, size-configurable, spiking neural network, systolic array