Quantization-Error-Robust Deep Neural Network for Embedded Accelerators

IEEE Transactions on Circuits and Systems II: Express Briefs (2022)

Abstract
Quantization with low precision has become an essential technique for deploying deep neural networks on energy- and memory-constrained devices. However, there is a limit to how far precision can be reduced because of the inevitable accuracy loss caused by quantization error. To overcome this obstacle, we propose methods for reforming and quantizing a network that achieve high accuracy even at low precision, without any runtime overhead in embedded accelerators. Our proposal consists of two analytical approaches: 1) network optimization that finds the most error-resilient equivalent network under the precision constraint and 2) quantization exploiting adaptive rounding offset control. The experimental results show accuracies of up to 98.31% and 99.96% of the floating-point results in 6-bit and 8-bit quantized networks, respectively. In addition, our methods enable lower-precision accelerator designs, reducing energy consumption by 8.5%.
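The abstract's second approach, quantization with adaptive rounding offset control, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's actual algorithm: it implements plain uniform symmetric quantization in which the usual round-to-nearest (offset 0.5) is generalized to a tunable offset, and the offset is chosen per tensor to minimize mean-squared quantization error. The function names `quantize` and `best_offset`, the candidate range, and the MSE criterion are all hypothetical choices for illustration.

```python
import numpy as np

def quantize(x, num_bits=6, offset=0.5):
    """Uniform symmetric quantization with a controllable rounding offset.

    offset=0.5 reproduces round-to-nearest; other values bias rounding
    down or up, which can be tuned per tensor to reduce quantization error.
    (Illustrative sketch, not the paper's exact method.)
    """
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.floor(x / scale + offset)       # offset-controlled rounding
    q = np.clip(q, -qmax - 1, qmax)        # clamp to the signed integer range
    return q * scale, scale                # dequantized values and step size

def best_offset(x, num_bits=6, candidates=np.linspace(0.3, 0.7, 9)):
    """Pick the rounding offset minimizing mean-squared quantization error."""
    errs = [np.mean((x - quantize(x, num_bits, o)[0]) ** 2)
            for o in candidates]
    return float(candidates[int(np.argmin(errs))])
```

Under this reading, the "adaptive" part is simply that the offset is selected per layer or per tensor from the data rather than fixed at 0.5, so the accelerator's datapath is unchanged at runtime, consistent with the paper's claim of no runtime overhead.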
Keywords
Deep neural network, accelerator, quantization, rescaling equivalent, adaptive rounding offset control