Overcoming Distribution Mismatch in Quantizing Image Super-Resolution Networks
arXiv (2023)
Abstract
Although quantization has emerged as a promising approach to reducing
computational complexity across various high-level vision tasks, it inevitably
leads to accuracy loss in image super-resolution (SR) networks. This is due to
the significantly divergent feature distributions across different channels and
input images of the SR networks, which complicates the selection of a fixed
quantization range. Existing works address this distribution mismatch problem
by dynamically adapting quantization ranges to the varying distributions during
test time. However, such a dynamic adaptation incurs additional computational
costs during inference. In contrast, we propose a new quantization-aware
training scheme that effectively Overcomes the Distribution Mismatch problem in
SR networks without the need for dynamic adaptation. Intuitively, this mismatch
can be mitigated by regularizing the distance between the feature and a fixed
quantization range. However, we observe that such regularization can conflict
with the reconstruction loss during training, negatively impacting SR accuracy.
Therefore, we opt to regularize the mismatch only when the gradients of the
regularization are aligned with those of the reconstruction loss. Additionally,
we introduce a layer-wise weight clipping correction scheme to determine a more
suitable quantization range for layer-wise weights. Experimental results
demonstrate that our framework effectively reduces the distribution mismatch
and achieves state-of-the-art performance with minimal computational overhead.
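The core idea of regularizing the mismatch only when its gradients agree with those of the reconstruction loss can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the element-wise sign-agreement test, the L2 distance-to-range regularizer, the symmetric range `[-bound, bound]`, and the weight `reg_weight` are all assumptions for the sake of the example.

```python
import numpy as np

def cooperative_grad(feature, grad_rec, bound, reg_weight=0.1):
    """Combine reconstruction gradient with a range regularizer,
    keeping the regularizer only where it agrees with reconstruction.

    feature  : feature values to be quantized (hypothetical)
    grad_rec : gradient of the reconstruction loss w.r.t. feature
    bound    : half-width of the fixed quantization range [-bound, bound]
    """
    # Gradient of an assumed L2 penalty on the distance between the
    # feature and the fixed quantization range: zero inside the range,
    # proportional to the overshoot outside it.
    grad_reg = np.where(feature > bound, feature - bound,
               np.where(feature < -bound, feature + bound, 0.0))
    # Keep the regularization gradient only where it points in the
    # same direction as the reconstruction gradient (no conflict).
    aligned = (grad_reg * grad_rec) > 0
    return grad_rec + reg_weight * np.where(aligned, grad_reg, 0.0)

feature = np.array([2.0, -2.0, 0.5])   # two outliers, one in-range value
grad_rec = np.array([1.0, -0.5, 0.3])
print(cooperative_grad(feature, grad_rec, bound=1.0))
```

For the outlier at 2.0 the regularizer pulls in the same direction as the reconstruction gradient and is applied; an element where the two gradients disagreed would receive the reconstruction gradient unchanged, which is the conflict-avoidance behavior the abstract describes.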