An Energy Efficient STDP-Based SNN Architecture With On-Chip Learning

IEEE Transactions on Circuits and Systems I: Regular Papers (2022)

Abstract
In this paper, we propose a spike-time-based unsupervised learning method using spike-timing-dependent plasticity (STDP). A simplified linear STDP learning rule is proposed for energy-efficient weight updates. To eliminate unnecessary computations on the input spike values, an early-stop mechanism is introduced in the forward pass. In addition, a hardware-friendly input quantization scheme is used to reduce the computational complexity of both the encoding phase and the forward pass. We construct a two-layer fully-connected spiking neural network (SNN) based on the proposed method. Compared to general rate-based SNNs trained by STDP, the proposed method reduces both the complexity of the network architecture (no extra inhibitory layer is needed) and the computation required for synaptic weight updates. According to the fixed-point simulation with 9-bit synaptic weights, the proposed SNN with 6144 excitatory neurons achieves 96% recognition accuracy on the MNIST dataset without any supervision. An SNN processor containing 384 excitatory neurons with on-chip learning capability is designed and implemented in 28 nm CMOS technology based on the proposed low-complexity methods. The SNN processor achieves an accuracy of 93% on the MNIST dataset. The implementation results show that the processor achieves a throughput of 277.78k FPS with 0.50 µJ/inference energy consumption in inference mode, and a throughput of 211.77k FPS with 0.66 µJ/learning energy consumption in learning mode.
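To make the "simplified linear STDP" idea concrete, the sketch below shows one plausible form of such a rule: synapses whose input spike precedes the post-synaptic spike are potentiated by an amount that decreases linearly with the spike-time difference, and acausal synapses receive a flat depression. The function name, learning rate, time window, and the exact shape of the rule are illustrative assumptions; the abstract does not specify the paper's actual update equation.

```python
import numpy as np

def linear_stdp_update(weights, pre_spike_times, post_spike_time,
                       lr=0.01, window=16, w_min=0.0, w_max=1.0):
    """Illustrative simplified linear STDP update (hypothetical form;
    the paper's exact rule is not given in the abstract).

    Pre-synaptic spikes arriving before the post-synaptic spike are
    potentiated, with the change shrinking linearly as the time gap
    grows; spikes arriving after it are depressed by a fixed amount.
    Weights are clipped to a fixed-point-friendly range.
    """
    dt = post_spike_time - pre_spike_times          # dt > 0: causal pairing
    dw = np.where(dt > 0,
                  lr * (1.0 - dt / window),         # linear potentiation
                  -lr)                              # flat depression
    return np.clip(weights + dw, w_min, w_max)

# Example: 4 synapses, post-synaptic neuron fires at t = 8
w = np.full(4, 0.5)
pre = np.array([2.0, 6.0, 10.0, 3.0])
w_new = linear_stdp_update(w, pre, post_spike_time=8.0)
```

A linear rule like this avoids the exponential decay term of classic STDP, so a hardware update needs only a subtraction, a shift/multiply, and a clamp, which is consistent with the abstract's energy-efficiency goal.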
Keywords
On-chip learning, spiking neural network, spike-timing-dependent plasticity, temporal coding, unsupervised learning