SQUAT: Stateful Quantization-Aware Training in Recurrent Spiking Neural Networks
CoRR (2024)
Abstract
Weight quantization is used to deploy high-performance deep learning models
on resource-limited hardware, enabling the use of low-precision integers for
storage and computation. Spiking neural networks (SNNs) share the goal of
enhancing efficiency, but adopt an 'event-driven' approach to reduce the power
consumption of neural network inference. While extensive research has focused
on weight quantization, quantization-aware training (QAT), and their
application to SNNs, the precision reduction of state variables during training
has been largely overlooked, potentially diminishing inference performance.
This paper introduces two QAT schemes for stateful neurons: (i) a uniform
quantization strategy, an established method for weight quantization, and (ii)
threshold-centered quantization, which allocates exponentially more
quantization levels near the firing threshold. Our results show that increasing
the density of quantization levels around the firing threshold improves
accuracy across several benchmark datasets. We provide an ablation analysis of
the effects of weight and state quantization, both individually and combined,
on model performance. Our comprehensive empirical evaluation includes
full precision, 8-bit, 4-bit, and 2-bit quantized SNNs, using QAT, stateful QAT
(SQUAT), and post-training quantization methods. The findings indicate that the
combination of QAT and SQUAT enhances performance the most, but, given the choice
of one or the other, QAT improves performance to a larger degree. These
trends are consistent across all datasets. Our methods have been made available in our
Python library snnTorch: https://github.com/jeshraghian/snntorch.
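The two state-quantization schemes can be illustrated with a short sketch. The snippet below is not the authors' snnTorch implementation; the membrane-potential range, the level construction, and the use of a straight-through estimator are assumptions made purely to illustrate uniform versus threshold-centered quantization of a stateful variable.

import torch


def uniform_quantize(u, num_bits=4, u_min=-1.0, u_max=2.0):
    # Snap the membrane potential u onto 2**num_bits evenly spaced levels
    # spanning [u_min, u_max]; gradients pass straight through.
    n_levels = 2 ** num_bits
    step = (u_max - u_min) / (n_levels - 1)
    q = torch.round((torch.clamp(u, u_min, u_max) - u_min) / step) * step + u_min
    return u + (q - u).detach()  # straight-through estimator


def threshold_centered_quantize(u, num_bits=4, threshold=1.0, u_min=-1.0, u_max=2.0):
    # Place half of the levels below and half above the firing threshold,
    # with geometrically shrinking offsets so level density grows
    # exponentially as u approaches the threshold.
    half = 2 ** (num_bits - 1)
    offsets = torch.tensor([2.0 ** -k for k in range(half)],
                           dtype=u.dtype, device=u.device)
    below = threshold - (threshold - u_min) * offsets
    above = threshold + (u_max - threshold) * offsets
    levels = torch.sort(torch.cat([below, above])).values
    idx = torch.argmin(torch.abs(u.unsqueeze(-1) - levels), dim=-1)
    q = levels[idx]
    return u + (q - u).detach()  # straight-through estimator


if __name__ == "__main__":
    u = torch.linspace(-1.0, 2.0, steps=9)
    print(uniform_quantize(u, num_bits=3))
    print(threshold_centered_quantize(u, num_bits=3))

In a stateful QAT loop, such a quantizer would typically be applied to the membrane potential at every time step before the spike/threshold comparison, so that training sees the same discretized state that a low-precision deployment would use.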