Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales for Pruning Recurrent SNN
ICLR 2024
Abstract
Recurrent Spiking Neural Networks (RSNNs) have emerged as a computationally
efficient and brain-inspired learning model. The design of sparse RSNNs with
fewer neurons and synapses helps reduce the computational complexity of RSNNs.
Traditionally, sparse SNNs are obtained by first training a dense and complex
SNN for a target task and then pruning neurons with low activity
(activity-based pruning) while maintaining task performance. In contrast, this
paper presents a task-agnostic methodology for designing sparse RSNNs by
pruning a large randomly initialized model. We introduce a novel Lyapunov Noise
Pruning (LNP) algorithm that uses graph sparsification methods and utilizes
Lyapunov exponents to design a stable sparse RSNN from a randomly initialized
RSNN. We show that LNP can leverage diversity in neuronal timescales to
design a sparse Heterogeneous RSNN (HRSNN). Further, we show that the same
sparse HRSNN model can be trained for different tasks, such as image
classification and temporal prediction. We experimentally show that, despite
being task-agnostic, LNP improves both the computational efficiency (fewer
neurons and synapses) and the prediction performance of RSNNs compared to
traditional activity-based pruning of trained dense models.
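To make the stability criterion behind LNP concrete, the sketch below estimates the largest Lyapunov exponent of a randomly initialized recurrent network and checks that sparsification keeps the dynamics stable. This is an illustrative assumption-laden stand-in, not the paper's actual LNP procedure: it uses a rate-based tanh network as a surrogate for the spiking LIF dynamics, a Benettin-style two-trajectory estimate of the exponent, and simple magnitude pruning in place of the paper's graph-sparsification step; all function names are hypothetical.

```python
import numpy as np

def largest_lyapunov_exponent(W, n_steps=1000, eps=1e-8, seed=0):
    """Estimate the largest Lyapunov exponent of the rate network
    x_{t+1} = tanh(W @ x_t) by tracking the divergence of two nearby
    trajectories with periodic renormalization (Benettin's method)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = rng.standard_normal(n)
    # Perturbed copy, offset by eps in a random direction.
    x_pert = x + eps * rng.standard_normal(n) / np.sqrt(n)
    log_growth = 0.0
    for _ in range(n_steps):
        x = np.tanh(W @ x)
        x_pert = np.tanh(W @ x_pert)
        d = np.linalg.norm(x_pert - x)
        log_growth += np.log(d / eps)
        # Renormalize the perturbation back to magnitude eps.
        x_pert = x + (x_pert - x) * (eps / d)
    return log_growth / n_steps

def prune_weakest(W, frac=0.5):
    """Zero out the given fraction of synapses with the smallest absolute
    weight -- a crude sparsifier standing in for LNP's graph sparsification."""
    W = W.copy()
    nz = np.flatnonzero(W)
    k = int(frac * nz.size)
    if k > 0:
        drop = nz[np.argsort(np.abs(W.flat[nz]))[:k]]
        W.flat[drop] = 0.0
    return W

# Randomly initialized recurrent weights, scaled near the edge of chaos.
rng = np.random.default_rng(42)
n = 200
W = rng.standard_normal((n, n)) / np.sqrt(n)

lam_dense = largest_lyapunov_exponent(W)
W_sparse = prune_weakest(W, frac=0.5)
lam_sparse = largest_lyapunov_exponent(W_sparse)
print(f"lambda_max dense:  {lam_dense:+.3f}")
print(f"lambda_max sparse: {lam_sparse:+.3f}  (stable if near or below zero)")
```

The design intuition mirrors the abstract: pruning is guided by the stability of the untrained dynamics (here, a negative or near-zero largest Lyapunov exponent) rather than by task-specific activity, so the resulting sparse network can later be trained for different tasks.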
Keywords
spiking neural network, SNN, network pruning, stability, neuromorphic, leaky integrate and fire, STDP, sparsification, task-agnostic pruning, timescale optimization