Exploiting deep learning accelerators for neuromorphic workloads

Pao-Sheng Vincent Sun, Alexander Titterton, Anjlee Gopiani, Tim Santos, Arindam Basu, Wei D. Lu, Jason K. Eshraghian

Neuromorphic Computing and Engineering (2024)

Abstract
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency when performing inference with deep learning workloads. Error backpropagation is presently regarded as the most effective method for training SNNs, but in a twist of irony, when training on modern graphics processing units this becomes more expensive than non-spiking networks. The emergence of Graphcore's intelligence processing units (IPUs) balances the parallelized nature of deep learning workloads with the sequential, reusable, and sparsified nature of operations prevalent when training SNNs. IPUs adopt multi-instruction multi-data parallelism by running individual processing threads on smaller data blocks, which is a natural fit for the sequential, non-vectorized steps required to solve spiking neuron dynamical state equations. We present an IPU-optimized release of our custom SNN Python package, snnTorch, which exploits fine-grained parallelism by utilizing low-level, pre-compiled custom operations to accelerate irregular and sparse data access patterns that are characteristic of training SNN workloads. We provide a rigorous performance assessment across a suite of commonly used spiking neuron models, and propose methods to further reduce training run-time via half-precision training. By amortizing the cost of sequential processing into vectorizable population codes, we ultimately demonstrate the potential for integrating domain-specific accelerators with the next generation of neural networks.
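
The "sequential, non-vectorized steps required to solve spiking neuron dynamical state equations" mentioned above are easiest to see in a minimal snnTorch forward pass. The sketch below is a generic illustration, not the paper's IPU-specific custom operations; the layer sizes, decay rate beta, and number of time steps are arbitrary placeholders. Each step consumes the membrane potential produced by the previous one, so the loop over time cannot be flattened into a single dense matrix multiply, which is the access pattern that MIMD-style per-tile threads can exploit.

import torch
import snntorch as snn

num_steps = 25                           # illustrative number of simulation steps
fc = torch.nn.Linear(784, 128)           # synaptic weights
lif = snn.Leaky(beta=0.9)                # leaky integrate-and-fire neuron layer

x = torch.rand(num_steps, 32, 784)       # [time, batch, features] input spikes/currents
mem = lif.init_leaky()                   # initialize membrane potential state
spk_rec = []

# Each iteration depends on the membrane state from the previous step,
# so the time dimension must be processed sequentially.
for t in range(num_steps):
    cur = fc(x[t])                       # synaptic current at step t
    spk, mem = lif(cur, mem)             # update membrane state, emit spikes
    spk_rec.append(spk)

spikes = torch.stack(spk_rec)            # [time, batch, neurons] output spike trains
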
Keywords
hardware accelerator,neuromorphic computing,spiking neural network,snnTorch,deep learning