
The Effects of Approximate Multiplication on Convolutional Neural Networks

IEEE Transactions on Emerging Topics in Computing (2022)

Abstract
This article analyzes the effects of approximate multiplication when performing inferences on deep convolutional neural networks (CNNs). Approximate multiplication reduces the cost of the underlying circuits so that CNN inferences can be performed more efficiently in hardware accelerators. The study identifies the critical factors in the convolution, fully-connected, and batch normalization layers that allow accurate CNN predictions despite the errors from approximate multiplication. The same factors also provide an arithmetic explanation of why bfloat16 multiplication performs well on CNNs. Experiments with recognized network architectures show that the approximate multipliers can produce predictions nearly as accurate as the FP32 references, without additional training. For example, the ResNet and Inception-v4 models with Mitch-$w$6 multiplication produce Top-5 errors within 0.2 percent of the FP32 references. A brief cost comparison of Mitch-$w$6 against bfloat16 is presented, in which a Mitch-$w$6 MAC operation saves up to 80 percent of energy compared to bfloat16 arithmetic. The most far-reaching contribution of this article is the analytical justification that multiplications may be approximated while additions need to be exact in CNN MAC operations.
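As a concrete illustration of the structure the abstract describes, approximate multiplies feeding exact additions in a MAC, here is a minimal Python sketch of plain Mitchell logarithmic multiplication, the algorithm underlying the Mitch-$w$ multipliers. The names mitchell_mul and approx_dot are illustrative, not from the paper, and the sketch omits the fixed-point operand formats, the $w$-bit truncation, and the error-compensation details of the actual Mitch-$w$ hardware.

```python
import math

def mitchell_mul(a: float, b: float) -> float:
    """Approximate a * b with Mitchell's logarithmic multiplication.

    Mitchell writes each operand as 2**k * (1 + f) with 0 <= f < 1 and
    approximates log2(1 + f) by f, so multiplying reduces to adding the
    log-domain terms. Plain Mitchell always underestimates because it
    drops the fa * fb cross term (worst case is about -11.1 percent).
    """
    if a == 0.0 or b == 0.0:
        return 0.0
    sign = -1.0 if (a < 0.0) != (b < 0.0) else 1.0
    a, b = abs(a), abs(b)
    ka, kb = math.floor(math.log2(a)), math.floor(math.log2(b))
    fa, fb = a / 2.0 ** ka - 1.0, b / 2.0 ** kb - 1.0
    s = fa + fb
    if s < 1.0:
        return sign * 2.0 ** (ka + kb) * (1.0 + s)
    # Carry into the exponent when the fraction sum overflows past 1.
    return sign * 2.0 ** (ka + kb + 1) * s

def approx_dot(xs, ws):
    """Dot product with approximate multiplies but exact additions,
    mirroring the paper's conclusion about CNN MAC operations."""
    return sum(mitchell_mul(x, w) for x, w in zip(xs, ws))

xs, ws = [1.5, 2.25, 3.0, 0.75], [0.5, -1.25, 2.0, 3.5]
print(sum(x * w for x, w in zip(xs, ws)))  # exact reference: 6.5625
print(approx_dot(xs, ws))                  # Mitchell approximation: 6.5
```

Note how the individual product errors stay small and the accumulation itself is exact, which is the arrangement the paper argues CNN accelerators should use.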
Key words
Machine learning, computer vision, object recognition, arithmetic and logic units, low-power design