Network Amplification with Efficient MACs Allocation

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Cited by 3 | Viewed 67
Abstract
Recent studies on deep convolutional neural networks present a simple paradigm of architecture design, i.e., models with more MACs typically achieve better accuracy, as exemplified by EfficientNet and RegNet. These works attempt to enlarge a network architecture with a single unified rule obtained by sampling and statistical methods. However, such a rule does not generalize well to the design of large networks, because it is derived from experiments on small network architectures. In this paper, we propose to enlarge the capacity of CNN models through fine-grained, stage-level MACs allocation over width, depth, and resolution. In particular, starting from a small base model, we gradually add extra channels, layers, or resolution in a dynamic-programming manner. By modifying the computation of each stage step by step, the enlarged network is equipped with an optimal allocation and utilization of MACs. On EfficientNet, our method consistently outperforms the original scaling method. Moreover, when the proposed method is applied to enlarge models based on GhostNet, we achieve state-of-the-art 80.9% and 84.3% ImageNet top-1 accuracies under 600M and 4.4B MACs, respectively.
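The abstract only sketches the allocation procedure. The snippet below is a minimal, hypothetical sketch of the step-by-step expansion idea: starting from a small base configuration, repeatedly pick the single stage-level change (more channels, more layers, or higher input resolution) with the best estimated accuracy gain per extra MAC, until a MACs budget is reached. The paper describes a dynamic-programming search; this sketch uses a simpler greedy loop, and all names (NetConfig, estimate_macs, amplify) and the accuracy proxy are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of step-by-step, stage-level MACs allocation.
# The candidate set and accuracy proxy are placeholders, not the paper's exact method.
from dataclasses import dataclass, replace
from typing import Callable, List, Tuple


@dataclass(frozen=True)
class StageConfig:
    channels: int   # output channels of the stage
    layers: int     # number of blocks in the stage


@dataclass(frozen=True)
class NetConfig:
    stages: Tuple[StageConfig, ...]
    resolution: int  # input resolution shared by all stages


def estimate_macs(cfg: NetConfig) -> float:
    """Coarse MACs proxy: layers * channels^2 * spatial area per stage,
    assuming 2x downsampling at every stage. Not an exact MAC counter."""
    macs = 0.0
    spatial = cfg.resolution
    for stage in cfg.stages:
        spatial = max(spatial // 2, 1)
        macs += stage.layers * (stage.channels ** 2) * (spatial ** 2)
    return macs


def candidate_configs(cfg: NetConfig,
                      channel_step: int = 16,
                      resolution_step: int = 32) -> List[NetConfig]:
    """Enumerate single-step modifications: widen one stage, deepen one stage,
    or raise the input resolution."""
    candidates = []
    for i, stage in enumerate(cfg.stages):
        wider = replace(stage, channels=stage.channels + channel_step)
        deeper = replace(stage, layers=stage.layers + 1)
        for new_stage in (wider, deeper):
            stages = cfg.stages[:i] + (new_stage,) + cfg.stages[i + 1:]
            candidates.append(replace(cfg, stages=stages))
    candidates.append(replace(cfg, resolution=cfg.resolution + resolution_step))
    return candidates


def amplify(cfg: NetConfig,
            target_macs: float,
            accuracy_proxy: Callable[[NetConfig], float]) -> NetConfig:
    """Greedily grow the network until the MACs budget is reached, always
    taking the candidate with the best proxy-accuracy gain per extra MAC."""
    while estimate_macs(cfg) < target_macs:
        base_acc = accuracy_proxy(cfg)
        base_macs = estimate_macs(cfg)
        best, best_score = None, float("-inf")
        for cand in candidate_configs(cfg):
            if estimate_macs(cand) > target_macs:
                continue  # skip candidates that overshoot the budget
            gain = accuracy_proxy(cand) - base_acc
            cost = estimate_macs(cand) - base_macs
            score = gain / max(cost, 1.0)
            if score > best_score:
                best, best_score = cand, score
        if best is None:  # no candidate fits the remaining budget
            break
        cfg = best
    return cfg
```

In practice the caller would supply a cheap accuracy proxy (e.g., a predictor or a few training steps) as accuracy_proxy; the quality of that proxy, rather than the loop itself, is what makes such a search practical.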
Keywords
network amplification, deep convolutional neural networks, simple paradigm, architecture design, network architecture, unified rule, statistical methods, CNN models, base small model, step-by-step, enlarged network, optimal allocation, EfficientNet, original scaling method, fine-grained MAC allocation, efficient MAC allocation