Toward the Predictability of Dynamic Real-Time DNN Inference

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2022)

Abstract
Deep neural networks (DNNs) have been widely used in many cyber–physical systems (CPSs). However, deploying DNNs in real-time systems remains a challenging task. In particular, the execution time of DNN inference must be predictable, such that it can be known whether a runtime inference will complete within a required timing constraint. Moreover, in many embedded applications, such as autonomous cars, the timing constraints may change dynamically with the runtime environment. A possible way to meet such dynamic real-time requirements is to execute different subnetworks of a DNN at runtime. However, improper construction of subnetworks may not only introduce unpredictable inference times, such that real-time constraints could be violated unexpectedly, but may also have poor compatibility with well-optimized machine learning frameworks (e.g., TensorFlow). In this article, we study the predictability of executing different subnetworks of a DNN. In particular, we present a featurewise runtime adaptation framework for DNN inference, which is implemented and validated on the NVIDIA Jetson TX2 and Nano with TensorFlow. The experimental results show that our method achieves predictable inference time in comparison with state-of-the-art methods.
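The abstract does not detail the framework's internals, but the featurewise adaptation idea can be illustrated with a minimal, hypothetical sketch (not the paper's implementation): a layer keeps its full weights, and inference uses only the first `width` output features, so a runtime scheduler can select a narrower subnetwork when the timing budget shrinks. The class and parameter names below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

class AdaptiveDense:
    """Toy layer supporting featurewise (width-sliced) inference.

    Hypothetical sketch of the featurewise-adaptation idea; not the
    authors' code. The full weight matrix is kept, and a runtime
    `width` argument selects how many output features to compute.
    """

    def __init__(self, in_features, out_features):
        self.w = rng.standard_normal((in_features, out_features))
        self.b = np.zeros(out_features)

    def forward(self, x, width):
        # Featurewise slice: compute only the first `width` outputs.
        # A smaller width means less work, hence lower latency.
        w = self.w[:, :width]
        b = self.b[:width]
        return np.maximum(x @ w + b, 0.0)  # ReLU

layer = AdaptiveDense(16, 64)
x = rng.standard_normal((1, 16))

full = layer.forward(x, 64)    # full network
small = layer.forward(x, 32)   # subnetwork under a tighter budget

# The narrow output is a prefix of the full output: switching widths
# at runtime trades latency for accuracy without retraining per mode.
assert small.shape == (1, 32)
assert np.allclose(small, full[:, :32])
```

Because the reduced-width computation is an ordinary dense slice, it maps directly onto standard framework kernels, which is the kind of compatibility with optimized frameworks (e.g., TensorFlow) the abstract highlights.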
Key words
Deep neural network (DNN), predictability, pruning, runtime