
Structured Pruning of CNNs at Initialization

ICLR 2023

Abstract
Pruning-at-initialization (PAI) proposes to prune the individual weights of a CNN before training, thus avoiding expensive fine-tuning or retraining of the pruned model. While PAI shows promising results in reducing model size, the pruned model still requires unstructured sparse matrix computation, making it difficult to achieve wall-clock speedups. In this work, we show theoretically and empirically that the accuracy of CNN models pruned by PAI methods depends only on the fraction of remaining parameters in each layer (i.e., the layer-wise density), regardless of the granularity of pruning. We formulate the PAI problem as a convex optimization of our newly proposed expectation-based proxy for model accuracy, whose solution yields the optimal layer-wise densities for a given model. Based on our formulation, we further propose a structured and hardware-friendly PAI method, named PreCrop, to prune or reconfigure CNNs in the channel dimension. Our empirical results show that PreCrop achieves a higher accuracy than existing PAI methods on several modern CNN architectures, including ResNet, MobileNetV2, and EfficientNet, for both CIFAR-10 and ImageNet. PreCrop achieves an accuracy improvement of up to $2.7\%$ over the state-of-the-art PAI algorithm when pruning MobileNetV2 on ImageNet. PreCrop also improves the accuracy of EfficientNetB0 by $0.3\%$ on ImageNet with only $80\%$ of the parameters and the same FLOPs.
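To make the channel-dimension idea concrete, here is a minimal PyTorch sketch (not the authors' code) of what a PreCrop-style reconfiguration might look like: given layer-wise densities, each layer is simply narrowed before training instead of being masked weight-by-weight, so the resulting model stays dense. The `crop_width` helper, the density values, and the rounding rule are illustrative assumptions; in the paper the densities come from solving the convex program over the expectation-based accuracy proxy.

```python
import torch.nn as nn

def crop_width(channels: int, density: float) -> int:
    """Shrink a layer's channel count by its target density.

    A sketch of channel-dimension pruning at initialization:
    instead of masking individual weights (unstructured sparsity),
    the layer is made narrower before training, so the network
    remains dense and hardware-friendly. The rounding rule here
    is an assumption, not the paper's exact procedure.
    """
    return max(1, round(channels * density))

# Hypothetical layer-wise densities; the paper derives these by
# solving a convex program over its accuracy proxy.
densities = [1.0, 0.75, 0.5, 0.5]
widths = [64, 128, 256, 512]

cropped = [crop_width(w, d) for w, d in zip(widths, densities)]
print(cropped)  # [64, 96, 128, 256]

# Build a toy conv stack at the cropped widths; it is then trained
# from scratch, with no sparse kernels or fine-tuning required.
layers, in_ch = [], 3
for out_ch in cropped:
    layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
               nn.ReLU(inplace=True)]
    in_ch = out_ch
model = nn.Sequential(*layers)
```

Because the output is an ordinary (smaller) dense network, the wall-clock speedup follows directly from the reduced channel widths rather than from sparse-matrix support.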
Key words
Pruning, Pruning-at-Initialization, Structured Pruning, Efficient Deep Learning, Efficient Model