Dynamic DNNs and Runtime Management for Efficient Inference on Mobile/Embedded Devices
CoRR (2024)
Abstract
Deep neural network (DNN) inference is increasingly being executed on mobile
and embedded platforms due to several key advantages in latency, privacy and
always-on availability. However, due to limited computing resources, efficient
DNN deployment on mobile and embedded platforms is challenging. Although many
hardware accelerators and static model compression methods have been proposed
in previous work, at system runtime multiple applications typically execute
concurrently and compete for hardware resources. This raises two main
challenges: Runtime Hardware Availability and Runtime Application Variability.
Previous works have addressed these challenges through either dynamic neural
networks that contain sub-networks with different performance trade-offs or
runtime hardware resource management. In this thesis, we propose a combined
approach: a system for DNN performance trade-off management that exploits the
runtime trade-off opportunities in both algorithms and hardware
to meet dynamically changing application performance targets and hardware
constraints in real time. We co-designed novel Dynamic Super-Networks to
maximise runtime system-level performance and energy efficiency on
heterogeneous hardware platforms. Compared with the state of the art (SOTA),
our experimental results using ImageNet on the GPU of the Jetson Xavier NX show
that our model is 2.4x faster at similar ImageNet Top-1 accuracy, or 5.1%
higher accuracy at similar latency. We
also designed a hierarchical runtime resource manager that tunes both dynamic
neural networks and DVFS at runtime. Compared with the Linux DVFS governor
schedutil, our runtime approach achieves up to a 19% latency reduction in the
single-model deployment scenario, and an 89% energy reduction and a 23% latency
reduction in the concurrent-model deployment scenario.
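To illustrate the combined idea the abstract describes — jointly tuning which dynamic-DNN sub-network to run and the hardware operating point (DVFS frequency) to meet a changing latency target — here is a minimal sketch. All profile numbers, configuration names, and the selection policy are hypothetical; the thesis's actual manager is hierarchical and more sophisticated.

```python
# Hypothetical sketch: a runtime manager that picks a (sub-network, GPU
# frequency) pair from an offline-profiled table so that a latency target
# is met, preferring the most accurate sub-network and, among ties, the
# lowest-energy operating point. All numbers below are invented.

# (sub_network, gpu_freq_mhz) -> (latency_ms, energy_mj_per_inference)
PROFILE = {
    ("small", 510): (42.0, 180.0),
    ("small", 1100): (21.0, 260.0),
    ("medium", 510): (75.0, 310.0),
    ("medium", 1100): (38.0, 450.0),
    ("large", 510): (140.0, 600.0),
    ("large", 1100): (70.0, 860.0),
}

SIZE_RANK = {"small": 0, "medium": 1, "large": 2}  # larger = more accurate


def choose_operating_point(latency_target_ms):
    """Return the (sub_network, freq) pair meeting the latency target,
    maximising accuracy (sub-network size) and minimising energy among
    equally sized candidates; None if no pair is feasible."""
    feasible = [(cfg, en) for cfg, (lat, en) in PROFILE.items()
                if lat <= latency_target_ms]
    if not feasible:
        return None
    # Rank by sub-network size first, then by lower energy.
    best = max(feasible, key=lambda item: (SIZE_RANK[item[0][0]], -item[1]))
    return best[0]
```

A runtime manager would re-run `choose_operating_point` whenever the application's latency target or the set of co-running workloads changes, then switch the dynamic DNN's active sub-network and request the corresponding DVFS frequency.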