Hardware Aware Spiking Neural Network Training and Its Mixed-Signal Implementation for Non-Volatile In-Memory Computing Accelerators

2023 30th IEEE International Conference on Electronics, Circuits and Systems (ICECS), 2023

Abstract
Spiking Neural Networks (SNNs) emulate the computational prowess and energy efficiency of the human brain. However, deploying SNNs in practice is often challenging due to hardware constraints. This paper introduces a method that tackles this problem through hardware-aware network selection and quantization of SNNs, thus bridging the gap between neural network architectures and hardware capabilities. The effectiveness of this approach is evaluated on three diverse datasets: Fashion MNIST (FMNIST), SHD-10, and an electrocardiogram (ECG) dataset drawn from the QT Database. Notably, our method achieves competitive quantized accuracies of 85.7%, 85.7%, and 85.19% on these datasets, respectively. These results are significant because they are obtained at qint8 precision, with only a minor accuracy loss relative to the full-precision float32 counterparts, despite substantial reductions in model complexity to meet hardware constraints. Additionally, we propose a mixed-signal implementation of the Leaky Integrate-and-Fire (LIF) neuron that exploits the advantages of both the analog and digital domains and is compatible with In-Memory Computing (IMC) accelerators. By leveraging non-volatile memory technologies, this work facilitates the deployment of SNNs on real-world hardware accelerators with minimal accuracy loss. Our results highlight the potential of mixed-signal IMC to balance flexibility and power efficiency, making it particularly valuable for ultra-low-power edge devices.
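To make the LIF neuron model concrete, the following is a minimal discrete-time sketch of leaky integrate-and-fire dynamics. The function name, parameter values (leak factor, threshold, reset), and the hard-reset behavior are illustrative assumptions, not details taken from the paper's mixed-signal circuit.

```python
import numpy as np

def lif_step(v, i_in, v_th=1.0, beta=0.9, v_reset=0.0):
    """One discrete-time step of a leaky integrate-and-fire neuron.
    All parameter values are illustrative, not from the paper."""
    v = beta * v + i_in                  # leaky integration of input current
    spike = v >= v_th                    # fire when membrane potential crosses threshold
    v = np.where(spike, v_reset, v)      # hard reset after a spike
    return v, spike.astype(np.float32)

# A constant input current drives periodic spiking:
v = np.zeros(1)
spikes = []
for t in range(10):
    v, s = lif_step(v, i_in=0.3)
    spikes.append(float(s[0]))
# spikes -> [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```

In a mixed-signal IMC realization, the integration (the `beta * v + i_in` accumulation) would map to the analog domain, while the threshold comparison and reset would be handled digitally; quantizing weights to qint8, as in the paper, bounds the precision required of the analog current summation.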
Keywords
Spiking Neural Networks, quantization, in-memory computing, non-volatile memories, edge computing