Oh! We Freeze: Improving Quantized Knowledge Distillation via Signal Propagation Analysis for Large Language Models
CoRR (2024)
Abstract
Large generative models, such as large language models (LLMs) and diffusion models, have revolutionized the fields of NLP and computer vision, respectively. However, their slow inference and high computation and memory requirements make them challenging to deploy on edge devices. In this study, we propose a lightweight quantization-aware fine-tuning technique using knowledge distillation (KD-QAT) to improve the performance of 4-bit weight-quantized LLMs using commonly available datasets, targeting a popular language use case: on-device chat applications. To improve this fine-tuning paradigm, as our main contributions, we provide insights into the stability of KD-QAT by empirically studying gradient propagation during training, to better understand the vulnerabilities of KD-QAT-based approaches to low-bit quantization errors. Based on these insights, we propose ov-freeze, a simple technique to stabilize the KD-QAT process. Finally, we experiment with the popular 7B LLaMAv2-Chat model at the 4-bit quantization level and demonstrate that
ov-freeze results in near floating-point precision performance, i.e., less than 0.7% loss of accuracy.
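The paper's code is not reproduced here, so the following is a minimal, illustrative PyTorch sketch of the two ingredients the abstract names: KD-QAT (a 4-bit fake-quantized student fine-tuned to match a full-precision teacher's logits) and an ov-freeze-style constraint (freezing the attention output and value projections). The class and function names (`FakeQuant4bitLinear`, `ov_freeze`, `kd_qat_step`), the HuggingFace-LLaMA-style parameter names (`o_proj`, `v_proj`), and the assumption that `teacher` and `student` are callables returning logits are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


class FakeQuant4bitLinear(torch.nn.Linear):
    """Linear layer with symmetric per-tensor 4-bit weight fake quantization.

    The forward pass uses quantized weights; a straight-through estimator lets
    gradients flow to the latent full-precision weights during KD-QAT.
    (Illustrative sketch, not the paper's quantizer.)
    """

    def forward(self, x):
        qmax = 7  # signed 4-bit integer range is [-8, 7]
        scale = self.weight.abs().amax().clamp(min=1e-8) / qmax
        w_q = (self.weight / scale).round().clamp(-8, qmax) * scale
        w = self.weight + (w_q - self.weight).detach()  # straight-through estimator
        return F.linear(x, w, self.bias)


def ov_freeze(student: torch.nn.Module) -> None:
    """Freeze the attention output (o_proj) and value (v_proj) projections.

    The name filter assumes HuggingFace-LLaMA-style parameter names; adapt it
    to the module naming of the model actually being fine-tuned.
    """
    for name, param in student.named_parameters():
        if "o_proj" in name or "v_proj" in name:
            param.requires_grad = False


def kd_qat_step(teacher, student, input_ids, optimizer, temperature=1.0):
    """One KD-QAT step: the quantized student mimics the float teacher's logits.

    Assumes teacher(input_ids) and student(input_ids) return logits of shape
    (batch, seq, vocab).
    """
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(input_ids)
    student_logits = student(input_ids)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a setup, one would swap the student's linear projections for `FakeQuant4bitLinear` (or wrap them with a quantization toolkit), call `ov_freeze(student)` once before training, and then run `kd_qat_step` over batches of a chat fine-tuning dataset while keeping the full-precision teacher fixed.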