Investigating the Impact of Quantization on Adversarial Robustness
arXiv (2024)
Abstract
Quantization is a promising technique for reducing the bit-width of deep
models to improve their runtime performance and storage efficiency, and it
has thus become a fundamental step in deployment. In real-world scenarios, quantized
models are often faced with adversarial attacks which cause the model to make
incorrect inferences by introducing slight perturbations. However, recent
studies have paid little attention to the impact of quantization on model
robustness. More surprisingly, existing studies on this topic even present
inconsistent conclusions, which prompted our in-depth investigation. In this
paper, we present the first analysis of the impact of the quantization
pipeline components that can incorporate robust optimization under the settings
of Post-Training Quantization and Quantization-Aware Training. Through our
detailed analysis, we discovered that this inconsistency arises from the use of
different pipelines in different studies, specifically regarding whether robust
optimization is performed and at which quantization stage it occurs. Our
research findings offer insights into deploying more secure and robust
quantized networks, and can serve as a reference for practitioners in
scenarios with high security requirements and limited resources.
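To make the notion of "reducing the bit-width" concrete, the following is a minimal sketch of uniform affine quantization in NumPy. It is a generic illustration of the quantize/dequantize round trip and its reconstruction error at different bit-widths, not the specific PTQ/QAT pipeline studied in the paper; the function name and parameters are our own.

```python
import numpy as np

def quantize_uniform(x, num_bits=8):
    """Uniformly quantize a float array to num_bits, then dequantize.

    Illustrative only: a generic affine (scale + zero-point) scheme,
    not the pipeline components analyzed in the paper.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale  # dequantized approximation of x

w = np.linspace(-1.0, 1.0, 11)       # stand-in for a weight tensor
err8 = np.abs(w - quantize_uniform(w, num_bits=8)).max()
err2 = np.abs(w - quantize_uniform(w, num_bits=2)).max()
# Lower bit-width -> coarser grid -> larger reconstruction error
```

The round-trip error is bounded by half the quantization step, which grows as the bit-width shrinks; this lossiness is what makes the interaction between quantization and robustness non-trivial.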
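The "slight perturbations" mentioned above can be illustrated with a one-step gradient-sign (FGSM-style) perturbation. The sketch below uses a logistic-regression classifier so the input gradient can be written analytically; the paper's experiments attack deep networks, and all names and values here are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step gradient-sign perturbation of input x under an
    L-infinity budget eps, for a logistic-regression model (w, b)
    with true label y in {0, 1}. Illustrative, not the paper's setup.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability
    grad_x = (p - y) * w                    # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)        # step to increase the loss

rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1 if (w @ x + b) > 0 else 0  # label the point by the model itself

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
```

Because the loss is convex in `x` here and the step follows the gradient sign, the perturbed input strictly increases the loss; with a large enough `eps` it can flip the prediction, which is the failure mode quantized models face in deployment.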