Filter pruning via expectation-maximization

Neural Computing and Applications (2022)

Abstract
The redundancy in convolutional neural networks (CNNs) introduces a large number of extra parameters, increasing computation and reducing filter diversity. In this paper, we introduce filter pruning via expectation-maximization (FPEM) to trim redundant structures and improve the diversity of the remaining ones. Our method is motivated by the observation that the filter diversity of a pruned network is positively correlated with its performance. The expectation step divides the filters of each layer into clusters by maximum likelihood and averages the output feature maps within each cluster. The maximization step computes the likelihood estimate of the clusters and formulates a loss function that makes the distributions within the same cluster consistent. After training, the redundant filters within each cluster can be trimmed, so that only diverse filters are retained. On CIFAR-10, the pruned models outperform the corresponding full models. On ImageNet ILSVRC-2012, FPEM reduces the FLOPs of ResNet-50 by 46.5% with only a 0.36% decrease in Top-1 accuracy, advancing the state of the art. FPEM also generalizes well to the object detection task.
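The sketch below is not the authors' implementation; it only illustrates the layer-wise clustering-and-trimming idea described in the abstract. The paper's E-step assigns filters by maximum likelihood and its M-step adds a consistency loss during training, whereas here a simple k-means over flattened filter weights stands in for the clustering, one representative filter per cluster is kept, and the cluster count n_clusters is an assumed hyperparameter.

```python
# Minimal sketch (assumptions noted above): cluster a conv layer's filters and
# keep one representative per cluster, trimming intra-cluster redundant filters.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def prune_layer_by_clustering(conv: nn.Conv2d, n_clusters: int) -> nn.Conv2d:
    w = conv.weight.detach()                       # (out_c, in_c, k, k)
    flat = w.reshape(w.size(0), -1).cpu().numpy()  # one row per filter
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(flat)

    # Keep the filter closest to each cluster centroid; drop the rest as redundant.
    keep = []
    for c in range(n_clusters):
        idx = (labels == c).nonzero()[0]
        centroid = flat[idx].mean(axis=0)
        dists = ((flat[idx] - centroid) ** 2).sum(axis=1)
        keep.append(int(idx[dists.argmin()]))
    keep = sorted(keep)

    pruned = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = w[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.detach()[keep].clone()
    return pruned

# Example: trim a 64-filter layer down to 32 representative filters.
layer = nn.Conv2d(16, 64, kernel_size=3, padding=1)
print(prune_layer_by_clustering(layer, n_clusters=32))
```

In the actual method, downstream layers and any batch-normalization parameters would also have to be adjusted to the reduced channel count, and the consistency loss is applied before trimming rather than replaced by post-hoc k-means.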
Keywords
Expectation maximization, CNN compression, CNN pruning