Efficient Deep Spiking Multi-Layer Perceptrons with Multiplication-Free Inference
arXiv (2023)
Abstract
Advancements in adapting deep convolutional architectures for Spiking Neural Networks (SNNs) have significantly enhanced image classification performance and reduced computational burdens. However, the inability of Multiplication-Free Inference (MFI) to align with attention and transformer mechanisms, which are critical to superior performance on high-resolution vision tasks, imposes limitations on these gains. To address this, our research explores a new pathway, drawing inspiration from the progress made in Multi-Layer Perceptrons (MLPs). We propose an innovative spiking MLP architecture that uses batch normalization to retain MFI compatibility and introduces a spiking patch encoding layer to enhance local feature extraction capabilities (a minimal sketch of both components follows the abstract). As a result, we establish an efficient multi-stage spiking MLP network that effectively blends global receptive fields with local feature extraction for comprehensive spike-based computation.
Without relying on pre-training or sophisticated SNN training techniques, our network secures a top-1 accuracy of 66.39%, surpassing the directly trained spiking ResNet-34 by 2.67%, while reducing computational costs, model parameters, and simulation steps. An expanded version of our network rivals the performance of the spiking VGG-16 network with a 71.64% top-1 accuracy while remaining smaller. Our findings highlight the potential of our deep SNN architecture in effectively integrating global and local learning abilities. Interestingly, the trained receptive field in our network mirrors the activity patterns of cortical cells. Source code is publicly accessible at https://github.com/EMI-Group/mixer-snn.
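
The abstract names two MFI-preserving ingredients: a spiking patch encoding layer for local feature extraction, and MLP mixing blocks that use batch normalization rather than layer normalization. The following is a minimal PyTorch sketch of how such components could fit together; it is an illustration under stated assumptions, not the authors' released implementation. The class names (LIFNeuron, SpikingPatchEncoding, SpikingMLPBlock), the single-step neuron dynamics, and all shapes and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class LIFNeuron(nn.Module):
    """Simplified single-step leaky integrate-and-fire neuron (assumption:
    no persistent membrane state; surrogate gradients, needed for training,
    are omitted in this inference-oriented sketch)."""
    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau, self.v_threshold = tau, v_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = x / self.tau                        # leaky charging of the membrane
        return (v >= self.v_threshold).float()  # binary spikes in {0, 1}

class SpikingPatchEncoding(nn.Module):
    """Patch embedding via strided convolution + BatchNorm + spiking neuron:
    local feature extraction whose output stays binary."""
    def __init__(self, in_ch: int = 3, dim: int = 128, patch: int = 4):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.bn = nn.BatchNorm2d(dim)    # foldable into the conv at inference
        self.sn = LIFNeuron()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.sn(self.bn(self.proj(x)))

class SpikingMLPBlock(nn.Module):
    """Mixer-style block: token mixing (global receptive field) followed by
    channel mixing, each as Linear -> BatchNorm -> spiking neuron. BatchNorm
    is used because, unlike LayerNorm, its affine transform can be fused into
    the adjacent linear weights at inference, keeping inference
    multiplication-free for spike inputs."""
    def __init__(self, tokens: int, dim: int):
        super().__init__()
        self.token_fc = nn.Linear(tokens, tokens)
        self.token_bn = nn.BatchNorm1d(dim)
        self.token_sn = LIFNeuron()
        self.chan_fc = nn.Linear(dim, dim)
        self.chan_bn = nn.BatchNorm1d(dim)
        self.chan_sn = LIFNeuron()

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, tokens, dim)
        # Token mixing: mix information across all patches.
        y = self.token_fc(x.transpose(1, 2))              # (B, dim, tokens)
        y = self.token_sn(self.token_bn(y)).transpose(1, 2)
        x = x + y                                         # additive residual
        # Channel mixing: mix information across features.
        y = self.chan_fc(x).transpose(1, 2)               # (B, dim, tokens)
        y = self.chan_sn(self.chan_bn(y)).transpose(1, 2)
        return x + y

# Usage sketch: 32x32 inputs -> 8x8 = 64 patch tokens -> one mixer block.
encode = SpikingPatchEncoding(in_ch=3, dim=128, patch=4)
block = SpikingMLPBlock(tokens=64, dim=128)
imgs = torch.rand(2, 3, 32, 32)
feats = encode(imgs).flatten(2).transpose(1, 2)           # (2, 64, 128)
out = block(feats)                                        # (2, 64, 128)
```

The design point the sketch is meant to surface: at inference, each BatchNorm can be folded into the preceding convolution or linear layer, and since the inputs to every weighted layer are binary (or small-integer, after additive residuals) spikes, evaluation reduces to weight accumulations. This is the MFI property the abstract emphasizes as incompatible with attention, whose data-dependent products cannot be folded away.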