Boosting the Convergence of Reinforcement Learning-Based Auto-Pruning Using Historical Data

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS (2024)

Abstract
Recently, neural network compression schemes such as channel pruning have been widely used to reduce the model size and computational complexity of deep neural networks (DNNs) for applications in power-constrained scenarios, such as embedded systems. Reinforcement learning (RL)-based auto-pruning has further been proposed to automate the DNN pruning process and avoid expensive hand-crafted work. However, the RL-based pruner involves a time-consuming training process, and pruning and evaluating each candidate network comes at high computational expense. These problems have greatly restricted the real-world application of RL-based auto-pruning. We therefore propose an efficient auto-pruning framework that solves this problem by exploiting historical data from previous auto-pruning runs. In our framework, we first boost the convergence of the RL pruner through transfer learning. We then propose an augmented transfer learning scheme that further speeds up training by improving transferability. Finally, an assistant learning process is introduced to improve the sample efficiency of the RL agent. Experiments show that our framework accelerates the auto-pruning process by 1.5x-2.5x for ResNet20 and by 1.81x-2.375x for other neural networks, such as ResNet56, ResNet18, and MobileNet v1.
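The transfer-learning idea in the abstract can be illustrated with a minimal sketch. The toy agent, state features, and function names below are hypothetical stand-ins (the paper's actual RL pruner and state encoding are not given here); the sketch only shows the warm-start mechanism: a new pruning agent is initialized from the weights of a previously trained one instead of from scratch, so fine-tuning, rather than full training, drives convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_pruner_params(hidden=16):
    """Randomly initialize a toy RL pruner (actor): it maps a per-layer
    state vector (4 hypothetical features) to a pruning ratio in (0, 1)."""
    return {
        "w1": rng.normal(scale=0.1, size=(4, hidden)),
        "w2": rng.normal(scale=0.1, size=(hidden,)),
    }

def transfer_init(historical_params):
    """Warm-start a new pruner from a previous auto-pruning run (transfer
    learning): copy the historical weights instead of starting randomly."""
    return {k: v.copy() for k, v in historical_params.items()}

def pruning_ratio(params, layer_state):
    """Actor forward pass: a sigmoid output gives the layer's pruning ratio."""
    h = np.tanh(layer_state @ params["w1"])
    return 1.0 / (1.0 + np.exp(-(h @ params["w2"])))

# A pruner trained in an earlier run (stand-in for historical data) ...
old = init_pruner_params()
# ... warm-starts the agent for a new pruning task.
new = transfer_init(old)
state = rng.normal(size=4)
assert np.isclose(pruning_ratio(old, state), pruning_ratio(new, state))
```

Before any fine-tuning, the transferred agent reproduces the historical agent's pruning decisions exactly; subsequent RL updates then only have to adapt it to the new network, which is the source of the reported speedup.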
Keywords
Task analysis,Auto-pruning,deep neural network (DNN),reinforcement learning (RL)