MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization
arXiv (2024)
Abstract
In this paper, we present a simple optimization-based preprocessing technique
called Weight Magnitude Reduction (MagR) to improve the performance of
post-training quantization. For each linear layer, we adjust the pre-trained
floating-point weights by solving an ℓ_∞-regularized optimization
problem. This process greatly diminishes the maximum magnitude of the weights
and smooths out outliers, while preserving the layer's output. The preprocessed
weights are more tightly centered around zero, which facilitates the subsequent
quantization process. To implement MagR, we solve the
ℓ_∞-regularized problem with an efficient proximal gradient
descent algorithm. Unlike existing preprocessing methods that involve linear
transformations and subsequent post-processing steps, which can introduce
significant overhead at inference time, MagR functions as a non-linear
transformation, eliminating the need for any additional post-processing. This
ensures that MagR introduces no overhead whatsoever during inference. Our
experiments demonstrate that MagR achieves state-of-the-art performance on the
Llama family of models. For example, we achieve a Wikitext2 perplexity of 5.95
on the LLaMA2-70B model for per-channel INT2 weight quantization without
incurring any inference overhead.
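To make the abstract's description concrete, the sketch below shows one way such an ℓ_∞-regularized, output-preserving preprocessing step could be implemented with proximal gradient descent. It is not the authors' code: the function names (`magr_preprocess`, `prox_linf`, `project_l1_ball`), the (in_features, out_features) weight layout, the per-output-column penalty, and the default values of `alpha` and `steps` are all illustrative assumptions. The only paper-specific facts used are that the objective is a least-squares output-preservation term plus an ℓ_∞ penalty, and that the proximal operator of the ℓ_∞ norm can be evaluated via the Moreau decomposition as the identity minus a projection onto an ℓ_1 ball.

```python
import torch

def project_l1_ball(v, radius):
    """Euclidean projection of vector v onto the l1 ball of the given radius
    (standard sorting-based algorithm, Duchi et al., 2008)."""
    if v.abs().sum() <= radius:
        return v
    u = v.abs().sort(descending=True).values
    cssv = u.cumsum(dim=0)
    k = torch.arange(1, u.numel() + 1, device=v.device)
    cond = u - (cssv - radius) / k > 0
    rho = k[cond][-1]
    theta = (cssv[rho - 1] - radius) / rho
    return torch.sign(v) * torch.clamp(v.abs() - theta, min=0.0)

def prox_linf(v, lam):
    """Proximal operator of lam * ||.||_inf.
    By Moreau decomposition it equals v minus the projection of v onto the
    l1 ball of radius lam, which shrinks only the largest-magnitude entries."""
    return v - project_l1_ball(v, lam)

def magr_preprocess(W0, X, alpha=1e-3, steps=100):
    """Hypothetical sketch of a MagR-style preprocessing pass for one linear layer.

    Minimizes 0.5 * ||X W - X W0||_F^2 + alpha * sum_j ||W[:, j]||_inf
    by proximal gradient descent: the largest-magnitude weights in each output
    column are pulled toward zero while the layer's output on the calibration
    activations X is approximately preserved.

    W0: (in_features, out_features) pre-trained weights (layout assumed here)
    X : (n_samples, in_features) calibration activations
    """
    H = X.T @ X                                    # Hessian of the quadratic term
    lr = 1.0 / torch.linalg.matrix_norm(H, ord=2)  # 1/L step size, L = spectral norm of H
    W = W0.clone()
    HW0 = H @ W0
    for _ in range(steps):
        grad = H @ W - HW0           # gradient of 0.5 * ||X W - X W0||_F^2
        V = W - lr * grad            # gradient step
        for j in range(W.shape[1]):  # proximal step, one output column at a time
            W[:, j] = prox_linf(V[:, j], lr * alpha)
    return W
```

In this sketch, the returned weights would simply replace the original floating-point weights before any standard post-training quantizer (e.g. per-channel rounding) is applied; no extra transformation has to be undone at inference time, which is the property the abstract emphasizes.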