Performance Characterization of Containerized DNN Training and Inference on Edge Accelerators

Prashanthi S. K., Vinayaka Hegde, Keerthana Patchava, Ankita Das, Yogesh Simmhan

arXiv (2023)

Abstract
Edge devices have typically been used for DNN inferencing. The increasing compute power of accelerated edge devices is leading to their use for DNN training as well. As privacy becomes a concern on multi-tenant edge devices, Docker containers provide a lightweight virtualization mechanism to sandbox models. But their overheads on edge devices have not yet been explored. In this work, we study the impact of containerized DNN inference and training workloads on an NVIDIA AGX Orin edge device and contrast it with bare-metal execution in terms of running time, CPU, GPU and memory utilization, and energy consumption. Our analysis shows that there are negligible containerization overheads for individually running DNN training and inference workloads.
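For illustration only (this is not the authors' benchmarking harness), a minimal Python sketch of the kind of comparison the abstract describes: the same DNN workload timed once bare metal and once inside a Docker container launched with the NVIDIA runtime on a Jetson-class device. The training script `train.py`, its arguments, and the L4T PyTorch image tag are placeholders and would need to match the actual workload and JetPack release.

```python
import os
import subprocess
import time

# Placeholder workload: any DNN training or inference script on the device.
TRAIN_CMD = ["python3", "train.py", "--epochs", "1"]

# Illustrative NVIDIA L4T PyTorch image for Jetson-class devices.
CONTAINER_IMAGE = "nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3"


def run_bare_metal() -> float:
    """Run the workload directly on the host and return wall-clock seconds."""
    start = time.time()
    subprocess.run(TRAIN_CMD, check=True)
    return time.time() - start


def run_containerized() -> float:
    """Run the same workload inside Docker using the NVIDIA runtime,
    which exposes the Jetson's integrated GPU to the container."""
    cmd = [
        "docker", "run", "--rm",
        "--runtime", "nvidia",               # NVIDIA container runtime on JetPack/L4T
        "-v", f"{os.getcwd()}:/workspace",   # mount the working directory into the container
        "-w", "/workspace",
        CONTAINER_IMAGE,
    ] + TRAIN_CMD
    start = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - start


if __name__ == "__main__":
    t_bare = run_bare_metal()
    t_cont = run_containerized()
    overhead = 100.0 * (t_cont - t_bare) / t_bare
    print(f"bare metal: {t_bare:.1f}s  container: {t_cont:.1f}s  overhead: {overhead:+.1f}%")
```

On Jetson boards, CPU/GPU utilization and power draw would typically be sampled alongside such runs with the stock tegrastats utility, though the exact instrumentation used in the paper is not described in the abstract.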