A Greedy Algorithm For Quantizing Neural Networks
JOURNAL OF MACHINE LEARNING RESEARCH (2021)
Abstract
We propose a new computationally efficient method for quantizing the weights of pre-trained neural networks that is general enough to handle both multi-layer perceptrons and convolutional neural networks. Our method deterministically quantizes layers in an iterative fashion with no complicated re-training required. Specifically, we quantize each neuron, or hidden unit, using a greedy path-following algorithm. This simple algorithm is equivalent to running a dynamical system, which we prove is stable for quantizing a single-layer neural network (or, alternatively, for quantizing the first layer of a multi-layer network) when the training data are Gaussian. We show that under these assumptions, the quantization error decays with the width of the layer, i.e., its level of over-parametrization. We provide numerical experiments, on multi-layer networks, to illustrate the performance of our methods on MNIST and CIFAR10 data, as well as for quantizing the VGG16 network using ImageNet data.
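The greedy path-following idea described above can be sketched in a few lines: the weights of a neuron are quantized one at a time, and each quantized weight is chosen to best cancel the running residual between the analog and quantized pre-activations on the training data. The sketch below is an illustrative reconstruction under assumptions (function and variable names are ours, not the paper's; the data matrix, alphabet, and tie-breaking are simplified), not the authors' reference implementation.

```python
import numpy as np

def greedy_quantize_neuron(w, X, alphabet):
    """Greedily quantize one neuron's weights.

    w        : (T,) analog weights of the neuron.
    X        : (m, T) training data; column X[:, t] multiplies weight w[t].
    alphabet : 1-D array of allowed quantized values (e.g. {-1, 0, 1}).
    Returns (q, u): quantized weights and the final residual X @ w - X @ q.
    """
    u = np.zeros(X.shape[0])        # running residual of the pre-activations
    q = np.zeros_like(w)
    for t in range(len(w)):
        Xt = X[:, t]
        # Project (residual + new analog contribution) onto the direction X_t,
        # then round that scalar to the nearest element of the alphabet.
        c = Xt @ (u + w[t] * Xt) / (Xt @ Xt + 1e-12)
        q[t] = alphabet[np.argmin(np.abs(alphabet - c))]
        # Update the residual: add what the analog weight contributes,
        # subtract what the quantized weight contributes.
        u = u + w[t] * Xt - q[t] * Xt
    return q, u
```

The paper's stability result can be read as a statement about this residual `u`: for Gaussian data, its norm stays controlled and the relative quantization error shrinks as the layer gets wider.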
Keywords
quantization, neural networks, deep learning, stochastic control, discrepancy theory