Stochastic Weight Pruning And The Role Of Regularization In Shaping Network Structure

Neurocomputing (2021)

Cited by 3 | Views 15
Abstract
The pressing need to reduce the capacity of deep neural networks has stimulated the development of network dilution methods and their analysis. In this study we present a framework for neural network pruning by sampling from a probability function that favors the zeroing of smaller parameters. This procedure of stochastically setting network weights to zero is done after each parameter updating step in the network learning algorithm. As part of the proposed framework, we examine the contribution of L-1 and L-2 regularization to the dynamics of pruning larger network structures such as neurons and filters while optimizing for weight pruning. We then demonstrate the effectiveness of the proposed stochastic pruning framework when used together with regularization terms for different network architectures and image analysis tasks. Specifically, we show that using our method we can successfully remove more than 50% of the channels/filters in VGG-16 and MobileNetV2 for CIFAR10 classification; in ResNet56 for CIFAR100 classification; in a U-Net for instance segmentation of biological cells; and in a CNN model tailored for COVID-19 detection. For these filter-pruned networks, we also present competitive weight pruning results while maintaining the accuracy levels of the original, dense networks. (C) 2021 Elsevier B.V. All rights reserved.
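The abstract describes stochastically setting weights to zero after each parameter update, sampling from a probability function that favors zeroing smaller parameters. A minimal sketch of that idea is shown below; the exponential form of the zeroing probability and the `temperature` parameter are illustrative assumptions, since the abstract only states that the function favors smaller magnitudes.

```python
import numpy as np

def stochastic_prune(weights, temperature=0.05, rng=None):
    """Stochastically zero weights, favoring small magnitudes.

    Each weight is zeroed with probability exp(-|w| / temperature),
    so smaller parameters are pruned more often. The exponential
    form is a hypothetical choice for illustration, not the paper's
    exact probability function.
    """
    rng = np.random.default_rng() if rng is None else rng
    p_zero = np.exp(-np.abs(weights) / temperature)
    # Keep each weight with probability 1 - p_zero
    mask = rng.random(weights.shape) >= p_zero
    return weights * mask

# In the framework described, this step would follow each update,
# roughly: w = w - lr * grad; w = stochastic_prune(w)
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000)
w_pruned = stochastic_prune(w, rng=rng)
```

Combined with an L1 or L2 regularization term in the loss, weights are driven toward zero between pruning steps, which (per the abstract) encourages entire neurons or filters to be pruned while optimizing for weight pruning.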
Keywords
Neural network compression, Weight pruning, Node pruning, Weight decay, COVID-19, Pruning dynamics