Towards Efficient Scheduling of Concurrent DNN Training and Inferencing on Accelerated Edges

CCGridW (2023)

Abstract
Edge devices are typically used to perform low-latency DNN inferencing close to the data source. However, with accelerated edge devices and privacy-oriented paradigms like Federated Learning, we can increasingly use them for DNN training as well. This can require training and inference workloads to run concurrently on an edge device without compromising inference latency. Here, we explore such concurrent scheduling on edge devices and present initial results demonstrating how concurrent training and inferencing affect latency and throughput.
Key words
Edge accelerator, DNN training and inferencing, GPU
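
To make the scenario in the abstract concrete, below is a minimal sketch of co-locating a background training job with a latency-sensitive inference loop on a single GPU-accelerated edge device. This is a hypothetical illustration, not the paper's scheduler: it assumes PyTorch, uses toy models and synthetic data, and relies on Python threads plus per-workload CUDA streams to let the two jobs share the accelerator.

```python
import contextlib
import threading
import time

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Two independent DNN workloads sharing one accelerator: a model being
# trained in the background and a model serving inference requests.
# Architectures and batch sizes are illustrative assumptions.
train_model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
infer_model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device).eval()

optimizer = torch.optim.SGD(train_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
stop = threading.Event()


def training_loop():
    # Background training on synthetic data, issued on its own CUDA stream
    # so its kernels can overlap with inference kernels on the GPU.
    stream = torch.cuda.Stream() if device.type == "cuda" else None
    ctx = torch.cuda.stream(stream) if stream else contextlib.nullcontext()
    with ctx:
        while not stop.is_set():
            x = torch.randn(64, 512, device=device)
            y = torch.randint(0, 10, (64,), device=device)
            optimizer.zero_grad()
            loss_fn(train_model(x), y).backward()
            optimizer.step()


def inference_loop(n_requests=200):
    # Foreground inference; records per-request latency while training
    # runs concurrently on the same device.
    stream = torch.cuda.Stream() if device.type == "cuda" else None
    ctx = torch.cuda.stream(stream) if stream else contextlib.nullcontext()
    latencies_ms = []
    with ctx, torch.no_grad():
        for _ in range(n_requests):
            x = torch.randn(1, 512, device=device)
            t0 = time.perf_counter()
            infer_model(x)
            if stream is not None:
                stream.synchronize()  # wait only for this request's kernels
            latencies_ms.append((time.perf_counter() - t0) * 1e3)
    latencies_ms.sort()
    print(f"median inference latency: {latencies_ms[len(latencies_ms) // 2]:.2f} ms")


trainer = threading.Thread(target=training_loop, daemon=True)
trainer.start()
inference_loop()
stop.set()
trainer.join(timeout=5.0)
```

Running the inference loop with and without the training thread gives a first-order view of the interference the paper studies; an actual scheduler would go further, for example by controlling when training batches are admitted so that inference latency targets are met.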