
An Event-driven Spiking Neural Network Accelerator with On-chip Sparse Weight.

ISCAS (2022)

Abstract
Spiking neural networks (SNNs) have drawn wide attention in recent research. With brain-inspired dynamics and spike-based communication, SNNs are expected to be more energy-efficient than existing artificial neural networks (ANNs). To make better use of the temporal sparsity of spikes and the spatial sparsity of weights in SNNs, this paper presents a sparse SNN accelerator. It adopts a novel self-adaptive spike compressing and decompressing (SASCD) mechanism that adjusts to different input spike sparsity, together with on-chip compressed weight storage and processing. We implement the octa-core design on a field programmable gate array (FPGA). The results demonstrate a peak performance of 35.84 GSOPs/s, which is equivalent to 358.4 GSOPs/s in dense SNN accelerators at 90% weight sparsity. For a single-layer perceptron model with rate coding implemented on the hardware, SASCD reduces the time step interval from 2.15 µs to 0.55 µs.
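The abstract's headline figures follow from the two kinds of sparsity it names. The Python sketch below is a minimal illustration, not the paper's implementation: the exact SASCD encoding and the on-chip weight format are not described in the abstract, and the index-list spike compression, CSR weight storage, and helper names compress_spikes / decompress_spikes are assumptions for illustration. It shows event-driven spike compression, compressed sparse weight processing, and the dense-equivalent throughput arithmetic (35.84 / (1 - 0.9) = 358.4).

import numpy as np
from scipy.sparse import csr_matrix

def compress_spikes(spike_vector):
    # Encode a binary spike vector as the indices of active neurons
    # (event-driven, address-style representation; illustrative stand-in for SASCD).
    return np.flatnonzero(spike_vector)

def decompress_spikes(events, n_neurons):
    # Rebuild the dense binary spike vector from the event list.
    dense = np.zeros(n_neurons, dtype=np.uint8)
    dense[events] = 1
    return dense

rng = np.random.default_rng(0)

# Weight matrix with roughly 90% zeros, stored in compressed (CSR) form,
# standing in for the accelerator's on-chip compressed weight storage.
W = rng.random((256, 256)) * (rng.random((256, 256)) > 0.9)
W_csr = csr_matrix(W)

# Sparse input spikes (temporal sparsity): compress, then verify the round trip.
spikes = (rng.random(256) > 0.95).astype(np.uint8)
events = compress_spikes(spikes)
assert np.array_equal(decompress_spikes(events, 256), spikes)

# Event-driven accumulation: only columns of spiking presynaptic neurons
# contribute, and only their nonzero weights are touched.
membrane_update = np.asarray(W_csr[:, events].sum(axis=1)).ravel()

# Dense-equivalent throughput quoted in the abstract: operating only on the
# ~10% nonzero weights, 35.84 GSOPs/s of sparse work corresponds to
# 35.84 / (1 - 0.9) = 358.4 GSOPs/s in a dense accelerator.
print(35.84 / (1 - 0.9))  # 358.4

The point of the sketch is simply that at 90% weight sparsity, an accelerator that skips zero weights performs about one tenth of the synaptic operations a dense design would need for the same layer, which is where the 10x dense-equivalent figure comes from.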
Key words
SNN, neuromorphic processor, sparse weight, sparse spike, FPGA