Efficient Neuromorphic Hardware Through Spiking Temporal Online Local Learning

IEEE Transactions on Very Large Scale Integration (VLSI) Systems (2022)

Abstract
Local learning schemes have shown promising performance in spiking neural network (SNN) training and are considered a step toward more biologically plausible learning. Despite many efforts to design high-performance neuromorphic systems, a fast and efficient on-chip training algorithm is still missing, which limits the deployment of neuromorphic systems in many real-time applications. This work proposes a scalable, fast, and efficient spiking neuromorphic hardware system with on-chip local learning capability. We introduce an effective hardware-friendly local training algorithm compatible with sparse temporal input coding and binary random classification weights. The algorithm is demonstrated to deliver competitive accuracy across different tasks. The proposed digital system exploits spike sparsity in communication, parallelism in vector–matrix operations and process-level dataflow, and locality of training errors, which leads to low cost and fast training speed. The system is optimized under various performance metrics. Taking energy, speed, resources, and accuracy into consideration, the proposed method shows around $10\times$ efficiency over a recent work using the direct feedback alignment (DFA) method and $4.5\times$ efficiency over the spike-timing-dependent plasticity (STDP) method. Moreover, our hardware architecture scales linearly with network size. Thus, our method has demonstrated great potential for use in various applications, especially those demanding low latency.
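To make the kind of local learning rule the abstract describes concrete, the following is a minimal NumPy sketch of a DFA-style update: a global output error is projected back through a fixed binary random feedback matrix, so each layer's update uses only locally available signals rather than a backpropagated chain. All dimensions, the step nonlinearity standing in for spiking dynamics, and the learning rate are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper)
n_in, n_hid, n_out = 784, 256, 10

W = rng.normal(0, 0.1, (n_hid, n_in))        # trainable hidden weights
B = rng.choice([-1.0, 1.0], (n_hid, n_out))  # fixed binary random feedback weights
C = rng.choice([-1.0, 1.0], (n_out, n_hid))  # frozen binary random classification weights

def train_step(x, target, lr=0.01):
    """One DFA-style local update; returns (output, error)."""
    global W
    # Forward pass: a step nonlinearity stands in for spiking neuron dynamics
    h = np.heaviside(W @ x, 0.0)   # binary "spike" activations
    y = C @ h                      # readout through fixed binary weights
    e = y - target                 # global output error
    # Local update: project the error through fixed binary B and gate it by
    # the active units, so no backward pass through C is needed
    delta = (B @ e) * h
    W -= lr * np.outer(delta, x)
    return y, e
```

Because `B` is fixed and binary, the error projection reduces to sign-controlled additions in hardware, which is consistent with the low-cost training path the abstract claims.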
Keywords
Backpropagation (BP), deep learning, local training, neuromorphic computing, spiking neural networks (SNNs)