Biologically-inspired training of spiking recurrent neural networks with neuromorphic hardware

2022 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS 2022): Intelligent Technology in the Post-Pandemic Era (2022)

Abstract
Recurrent spiking neural networks (SNNs) are inspired by the working principles of biological nervous systems and offer unique temporal dynamics and event-based processing. Recently, the error backpropagation through time (BPTT) algorithm has been successfully employed to train SNNs offline, achieving performance comparable to artificial neural networks (ANNs) on complex tasks. However, BPTT has severe limitations in online learning scenarios, where the network must process and learn from incoming data simultaneously. Specifically, because BPTT separates the inference and update phases, it requires storing all neuronal states in order to compute the weight updates backwards in time. To address these fundamental issues, alternative credit assignment schemes are required. Within this context, neuromorphic hardware (NMHW) implementations of SNNs can greatly benefit from in-memory computing (IMC) concepts, which follow the brain-inspired collocation of memory and processing and further enhance energy efficiency. In this work, we utilize e-prop, a biologically-inspired local and online training algorithm that approximates BPTT and is compatible with IMC, and present an approach to support both inference and training of a recurrent SNN using NMHW. To do so, we embed the SNN weights on an in-memory computing NMHW with phase-change memory (PCM) devices and integrate it into a hardware-in-the-loop training setup. We develop our approach with respect to the limited precision and imperfections of the analog devices, using a PCM-based simulation framework and an NMHW consisting of in-memory computing cores fabricated in 14 nm CMOS technology with 256x256 PCM crossbar arrays. We demonstrate that our approach is robust even at 4-bit precision and achieves performance competitive with a 32-bit floating-point realization, while simultaneously equipping the SNN with online training capabilities and exploiting the acceleration benefits of NMHW.
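To make the training scheme concrete, the following is a minimal sketch (not the authors' code) of an e-prop-style online update for a recurrent network of leaky integrate-and-fire neurons, with a simple 4-bit uniform weight quantization standing in for the limited precision of the PCM devices. All constants, the surrogate gradient, and the random-feedback learning signal are illustrative assumptions rather than details from the paper.

```python
# Sketch of an e-prop-style online update for a recurrent LIF SNN (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_rec, n_out = 20, 100, 2
alpha, v_th, lr = 0.9, 1.0, 1e-3          # membrane decay, spike threshold, learning rate

W_in = rng.normal(0, 0.1, (n_rec, n_in))
W_rec = rng.normal(0, 0.1, (n_rec, n_rec))
np.fill_diagonal(W_rec, 0.0)               # no self-connections
W_out = rng.normal(0, 0.1, (n_out, n_rec))
B = rng.normal(0, 0.1, (n_rec, n_out))     # fixed random feedback for the learning signal

def quantize(w, bits=4):
    """Uniform weight quantization, a stand-in for limited PCM precision."""
    scale = np.max(np.abs(w)) + 1e-12
    levels = 2 ** (bits - 1) - 1
    return np.round(w / scale * levels) / levels * scale

def surrogate_grad(v):
    """Pseudo-derivative of the spike function w.r.t. the membrane potential."""
    return 0.3 * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))

T = 50
x = rng.binomial(1, 0.05, (T, n_in)).astype(float)     # Poisson-like input spikes
y_target = np.zeros((T, n_out)); y_target[:, 0] = 1.0  # dummy target trace

v = np.zeros(n_rec); z = np.zeros(n_rec)
eps_in = np.zeros(n_in)                    # eligibility vectors: low-pass filtered
eps_rec = np.zeros(n_rec)                  # presynaptic activity, updated forward in time
dW_in = np.zeros_like(W_in); dW_rec = np.zeros_like(W_rec)

for t in range(T):
    # Inference: these matrix-vector products are what the PCM crossbars would execute.
    W_in_q, W_rec_q = quantize(W_in), quantize(W_rec)
    v = alpha * v + W_in_q @ x[t] + W_rec_q @ z - z * v_th   # LIF dynamics with reset
    z_new = (v > v_th).astype(float)

    # Local eligibility traces: no backward pass or storage of past network states needed.
    eps_in = alpha * eps_in + x[t]
    eps_rec = alpha * eps_rec + z
    psi = surrogate_grad(v)

    # Learning signal: output error broadcast through fixed random feedback weights.
    y = W_out @ z_new
    L = B @ (y - y_target[t])

    # e-prop-style accumulation: learning signal times eligibility trace.
    dW_in += np.outer(L * psi, eps_in)
    dW_rec += np.outer(L * psi, eps_rec)
    z = z_new

W_in -= lr * dW_in
W_rec -= lr * dW_rec
np.fill_diagonal(W_rec, 0.0)
```

In the hardware-in-the-loop setting described above, the quantized matrix-vector products would be offloaded to the PCM crossbar arrays, while the eligibility traces and weight-update accumulation remain local and forward-in-time, which is what makes the scheme amenable to online training on NMHW.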
Keywords
online training, spiking neural networks, neuromorphic hardware, in-memory computing, phase-change memory