Exploring Bit-Level Sparsity for Partial Sum Quantization in Computing-In-Memory Accelerator

2023 IEEE 12th Non-Volatile Memory Systems and Applications Symposium (NVMSA)

Abstract
Computing-In-Memory (CIM) has demonstrated great potential in boosting the performance and energy efficiency of convolutional neural networks. However, due to the limited size and precision of its memory array, the input and weight matrices of convolution operations have to be split into sub-matrices or even binary sub-matrices, especially when bit-slicing and single-level cells (SLCs) are used, which generates a large number of partial sums. To maintain high computing precision, high-resolution analog-to-digital converters (ADCs) are used to read out these partial sums, at the cost of considerable area and energy overhead. Partial sum quantization (PSQ), a technique that can greatly reduce the required ADC resolution, remains sparsely studied. This paper proposes a novel PSQ approach for CIM-based accelerators that exploits the bit-level sparsity of neural networks. A reparametrized clipping function is then proposed to find the optimal clipping threshold for the ADCs. Finally, we develop a general post-training quantization framework for PSQ-CIM. Experiments on a variety of neural networks and datasets show that, in a typical case (ResNet18 on ImageNet), the required ADC resolution can be reduced to 2 bits with little accuracy loss (~0.92%) and hardware efficiency can be improved by 199.7%.
Keywords
Computing-In-Memory (CIM), partial sum quantization (PSQ), bit-level sparsity, post-training quantization (PTQ)
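To make the setting concrete, the sketch below emulates one crossbar column of a bit-sliced, SLC-based CIM array: binary inputs drive binary weight slices, each column read produces a partial sum, and a low-resolution ADC is modeled as a uniform quantizer with a clipping threshold. This is a minimal illustrative sketch only; the bit widths, the fixed clipping value, and the helper names (bit_slice, quantize_partial_sum) are assumptions, and the paper's actual reparametrized clipping function and post-training quantization framework are not reproduced here.

import numpy as np

def bit_slice(weights, n_bits=4):
    # Decompose non-negative integer weights into binary bit planes (LSB first),
    # mimicking SLC storage of one weight bit per memory cell.
    return [(weights >> b) & 1 for b in range(n_bits)]

def quantize_partial_sum(psum, clip, adc_bits=2):
    # Uniform quantizer standing in for a low-resolution ADC that clips at `clip`.
    # In the paper the threshold is learned; here it is a hand-picked assumption.
    levels = 2 ** adc_bits - 1
    step = clip / levels
    return np.round(np.clip(psum, 0.0, clip) / step) * step

# Toy example: one 128-row crossbar column with binary inputs and 4-bit weights
# stored across four SLC bit slices.
rng = np.random.default_rng(0)
inputs = rng.integers(0, 2, size=128)      # binary input bits on the word lines
weights = rng.integers(0, 16, size=128)    # 4-bit unsigned weights

exact, quantized = 0.0, 0.0
for b, w_slice in enumerate(bit_slice(weights)):
    psum = float(inputs @ w_slice)         # column readout: one partial sum per bit slice
    exact += psum * (1 << b)               # full-precision accumulation (ideal ADC)
    quantized += quantize_partial_sum(psum, clip=24.0) * (1 << b)  # 2-bit ADC path

print(f"exact={exact:.0f}  2-bit-ADC={quantized:.0f}")

In the paper's framework, the clipping threshold would instead be selected per layer by the proposed reparametrized clipping function during post-training quantization, rather than fixed by hand as in this sketch.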