DynExit: A Dynamic Early-Exit Strategy for Deep Residual Networks

2019 IEEE International Workshop on Signal Processing Systems (SiPS)

Abstract
Early-exit is a technique that terminates a pre-specified computation at an early stage depending on the input sample, and has been introduced to reduce the energy consumption of Deep Neural Networks (DNNs). Previous early-exit approaches suffer from the burden of manually tuning early-exit loss weights to find a good trade-off between complexity reduction and system accuracy. In this work, we first propose DynExit, a dynamic loss-weight modification strategy for ResNets, which adaptively adjusts the ratio between different exit branches and searches for an operating point that balances accuracy and cost. We then develop an efficient hardware unit for early-exit branches, which can be easily integrated into existing DNN hardware architectures to reduce average computing latency and energy cost. Experimental results show that the proposed DynExit strategy reduces FLOPs by up to 43.6% compared to state-of-the-art approaches. It also achieves a 1.2% accuracy improvement over an existing end-to-end fixed loss-weight training scheme at a comparable computation reduction ratio. The proposed hardware architecture for DynExit is evaluated on a Xilinx Zynq-7000 ZC706 development board. Synthesis results demonstrate that the architecture achieves high speed with low hardware complexity. To the best of our knowledge, this is the first hardware implementation of early-exit techniques for DNNs in the open literature.
Key words
Deep neural networks (DNN), early-exit, efficient hardware architecture, adaptive computation, model compression
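
The abstract describes two mechanisms: adaptive re-weighting of exit-branch losses during training, and confidence-based early termination at inference. The sketch below illustrates one plausible realization in PyTorch; the `BranchyResNet` structure, the softmax-of-losses weight update in `dynexit_step`, and the confidence threshold in `infer_with_early_exit` are illustrative assumptions, since the abstract does not specify the paper's exact update rule or exit criterion.

```python
# Minimal sketch of dynamic early-exit loss weighting, assuming a
# ResNet backbone with one exit classifier per residual stage.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchyResNet(nn.Module):
    """Backbone with intermediate exit heads (hypothetical structure)."""
    def __init__(self, stages, exits, final_head):
        super().__init__()
        self.stages = nn.ModuleList(stages)  # residual stages
        self.exits = nn.ModuleList(exits)    # early-exit classifiers
        self.final_head = final_head         # final classifier

    def forward(self, x):
        logits = []
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            logits.append(exit_head(x))      # intermediate prediction
        logits.append(self.final_head(x))
        return logits                        # one logit tensor per exit

def dynexit_step(model, x, y, optimizer):
    """One training step with dynamically re-weighted exit losses."""
    optimizer.zero_grad()
    losses = torch.stack([F.cross_entropy(l, y) for l in model(x)])
    # Assumed update rule: weight each exit in proportion to its current
    # loss, renormalized every step (one plausible "dynamic" scheme; the
    # paper's actual DynExit rule may differ).
    weights = F.softmax(losses.detach(), dim=0)
    total = (weights * losses).sum()
    total.backward()
    optimizer.step()
    return total.item()

@torch.no_grad()
def infer_with_early_exit(model, x, threshold=0.9):
    """Stop at the first exit whose max softmax confidence passes the
    threshold (assumed exit rule; shown for batch size 1)."""
    for stage, exit_head in zip(model.stages, model.exits):
        x = stage(x)
        probs = F.softmax(exit_head(x), dim=1)
        if probs.max().item() >= threshold:  # confident: skip later stages
            return probs
    return F.softmax(model.final_head(x), dim=1)
```

Under this scheme, easy samples leave at shallow branches and skip the remaining stages, which is how an early-exit design trades per-sample computation (and hence average latency and energy) against accuracy.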