Pruning filters with L1-norm and capped L1-norm for CNN compression

Applied Intelligence (2020)

Cited 105 | Views 25
Abstract
The rapid progress of convolutional neural networks (CNNs) in numerous real-world applications is often hindered by a surge in network size and computational cost. Recently, researchers have concentrated on alleviating these issues by compressing CNN models, for example by pruning filters and weights. In contrast to weight pruning, filter pruning does not result in sparse connectivity patterns. In this article, we propose a new technique to estimate the significance of filters. More precisely, we combine the L1-norm with the capped L1-norm to represent the amount of information extracted by each filter and to control regularization. During pruning, insignificant filters are removed directly without any loss in test accuracy, yielding much slimmer and more compact models with comparable accuracy; this process is iterated a few times. We experimentally validate the effectiveness of our approach with several advanced CNN models on standard data sets. In particular, on CIFAR-10 our method prunes 92.7% of the parameters of VGG-16 with a 75.8% reduction in floating-point operations (FLOPs) without loss of accuracy, advancing the state of the art.
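The abstract describes scoring each filter by combining its L1-norm with a capped L1-norm. A minimal PyTorch sketch of such a score is given below; since the abstract does not specify the exact combination rule, the cap parameter, or the pruning ratio, the simple sum, `theta=0.1`, and `prune_ratio=0.3` used here are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

def filter_importance(conv: nn.Conv2d, theta: float = 0.1) -> torch.Tensor:
    """Score each output filter of a convolutional layer.

    Combines the plain L1-norm with a capped L1-norm,
    sum(min(|w|, theta)). The exact combination used in the
    paper is not given in the abstract, so a simple sum of the
    two terms is used here as an illustrative assumption.
    """
    w = conv.weight.detach().abs()                  # (out_ch, in_ch, kH, kW)
    flat = w.flatten(start_dim=1)                   # one row per filter
    l1 = flat.sum(dim=1)                            # plain L1-norm
    capped_l1 = flat.clamp(max=theta).sum(dim=1)    # capped L1-norm
    return l1 + capped_l1

# Rank the filters of the first conv layer of an untrained VGG-16.
model = vgg16(weights=None)
conv0 = model.features[0]
scores = filter_importance(conv0, theta=0.1)

# Indices of the lowest-scoring filters: candidates for removal.
prune_ratio = 0.3
k = int(prune_ratio * scores.numel())
prune_idx = torch.argsort(scores)[:k]
print(prune_idx)
```

In an iterative scheme like the one the abstract sketches, the filters at `prune_idx` would be removed, the network fine-tuned, and the scoring repeated for a few rounds.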
Keywords
Filter pruning, Capped L1-norm, VGGnet, CIFAR, Convolutional neural network, FLOPs