An Automatic Neural Network Architecture-and-Quantization Joint Optimization Framework for Efficient Model Inference

Lian Liu, Ying Wang, Xiandong Zhao, Weiwei Chen, Huawei Li, Xiaowei Li, Yinhe Han

IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. (2024)

Abstract
Efficient deep learning models, especially those optimized for edge devices, benefit from low inference latency and reduced energy consumption. Two classical techniques for efficient model inference are lightweight neural architecture search (NAS), which automatically designs compact network models, and quantization, which reduces the bit-precision of neural network models. As a consequence, joint design of the neural architecture and the quantization precision settings is becoming increasingly popular. Three main aspects affect the performance of this joint optimization: quantization precision selection (QPS), quantization-aware training (QAT), and the NAS process itself. However, existing works address at most two of these aspects and therefore achieve suboptimal performance. To this end, we propose a novel automatic optimization framework, DAQU (the name alludes to Daqu, an ancient liquor fermentation process), that jointly searches for the Pareto-optimal combination of neural architecture and quantization precision among more than 10^47 quantized subnet models. To overcome the instability of conventional automatic optimization frameworks, DAQU incorporates a warm-up strategy to reduce the accuracy gap among different neural architectures, and a precision-transfer training approach to maintain flexibility across different quantization precision settings. Our experiments show that the quantized lightweight neural networks generated by DAQU consistently outperform state-of-the-art joint NAS-and-quantization optimization methods.
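The abstract describes a search for Pareto-optimal combinations of architecture and quantization precision. The toy sketch below is not the paper's DAQU implementation; it only illustrates the underlying idea of a joint search: enumerate hypothetical (width, bit-width) pairs, score each with synthetic cost and error proxies, and keep the non-dominated set. All names and numbers are illustrative assumptions.

```python
from itertools import product

# Hypothetical search space: channel widths x weight bit-widths.
ARCH_WIDTHS = [16, 32, 64]
BIT_CHOICES = [2, 4, 8]

def cost(width, bits):
    # Synthetic inference-cost proxy: parameter count times bit-width.
    return width * width * bits

def error(width, bits):
    # Synthetic accuracy proxy: wider, higher-precision subnets err less.
    return 1.0 / width + 0.5 / bits

def pareto_front(candidates):
    """Keep candidates not dominated in both cost and error (lower is better)."""
    front = []
    for c in candidates:
        dominated = any(
            o is not c and o["cost"] <= c["cost"] and o["error"] <= c["error"]
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

candidates = [
    {"width": w, "bits": b, "cost": cost(w, b), "error": error(w, b)}
    for w, b in product(ARCH_WIDTHS, BIT_CHOICES)
]
front = pareto_front(candidates)
```

On this toy space the front keeps the cheapest low-precision subnet, the most accurate high-precision one, and the non-dominated trade-offs in between; DAQU's contribution is making such a joint search stable at scale (warm-up, precision-transfer training), which this sketch does not model.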
Key words
Neural architecture search, network quantization, automatic joint optimization, efficient model inference