
Memory-Efficient Batch Normalization by One-Pass Computation for On-Device Training.

IEEE Trans. Circuits Syst. II Express Briefs (2024)

Abstract
Batch normalization (BN) has become ubiquitous in modern deep learning architectures because of its remarkable improvement in deep neural network (DNN) training performance. However, the two-pass computation of statistical estimation and element-wise normalization in BN training requires two accesses to the input data, resulting in a large increase in off-chip memory traffic during DNN training. In this paper, we propose a novel accelerator, named the one-pass normalizer (OPN), to achieve memory-efficient BN for on-device training. In terms of dataflow, we propose one-pass computation based on sampling-based range normalization and sparse data recovery techniques to reduce the off-chip memory access of BN. Regarding the OPN circuit, we propose channel-wise constant extraction to achieve a compact design. Experimental results show that the one-pass computation reduces the off-chip memory access of BN by 2.0–3.8× compared with previous state-of-the-art designs while maintaining training performance. Moreover, the channel-wise constant extraction reduces the gate count and power consumption of OPN by 56% and 73%, respectively.
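
The abstract contrasts standard two-pass BN, where one pass over the input computes the per-channel statistics and a second pass normalizes every element, with a one-pass scheme built on sampling-based range normalization. The NumPy sketch below illustrates that contrast only; the sampling ratio, the range-to-standard-deviation constant, and the function names are assumptions made for illustration and do not reproduce the paper's OPN dataflow, sparse data recovery, or channel-wise constant extraction.

import numpy as np

def two_pass_bn(x, eps=1e-5):
    # Standard BN over an (N, C) batch: two full passes over x.
    # Pass 1 reads every element to estimate per-channel mean/variance;
    # pass 2 reads every element again to normalize, which is the source
    # of the extra off-chip memory traffic described in the abstract.
    mean = x.mean(axis=0)                    # pass 1: statistics
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)   # pass 2: normalization

def sampled_range_norm(x, sample_ratio=0.1, eps=1e-5, rng=None):
    # Illustrative one-pass-style normalization (NOT the paper's OPN):
    # statistics come from a small random sample, and the standard
    # deviation is approximated by the sample range, so the bulk of x
    # only needs to be read once, during normalization.
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    k = max(2, int(n * sample_ratio))
    idx = rng.choice(n, size=k, replace=False)
    sample = x[idx]                          # small sample, cheap to buffer on-chip
    mean = sample.mean(axis=0)
    # For roughly Gaussian activations, the range of k samples spans about
    # 2*sqrt(2*ln(k)) standard deviations (range-BN-style estimate).
    scale = (sample.max(axis=0) - sample.min(axis=0)) / (2.0 * np.sqrt(2.0 * np.log(k)))
    return (x - mean) / (scale + eps)        # single full pass over x

if __name__ == "__main__":
    x = np.random.randn(4096, 64).astype(np.float32) * 3.0 + 1.5
    print(np.abs(two_pass_bn(x) - sampled_range_norm(x)).mean())

The sketch only shows why a sampled, range-based estimate removes the need for a second full read of the activations; the paper's contribution lies in making this accurate enough for training and cheap enough in hardware.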
Key words
Memory-efficient accelerator, Deep neural networks, Batch normalization, On-device training, One-pass computation