Joint multi-user DNN partitioning and task offloading in mobile edge computing.

Ad Hoc Networks (2023)

Abstract
Mobile edge computing brings artificial intelligence computing close to terminals, where Deep Neural Networks (DNNs) should be partitioned so that part of each task is offloaded to the edge for execution, reducing latency and saving energy. Most existing studies assume that all tasks are of the same type or that servers have identical computing resources. In reality, Mobile Devices (MDs) and Edge Servers (ESs) are heterogeneous in both type and computing resources, so it is challenging to find the optimal partition point for each DNN and offload it to an appropriate ES. To fill this gap, we propose a partitioning-and-offloading scheme for the heterogeneous task-server system that reduces the overall system latency and energy consumption of DNN inference. The scheme has four steps. First, it establishes a partitioning and task offloading model that adapts to the DNN model. Second, to reduce the solution space, it applies a Partition Point Retain (PPR) algorithm. Third, an Optimal Partition Point (OPP) algorithm finds, for each MD-ES pair, the partition point with the minimum cost. Finally, based on these partition points, a DNN task offloading decision is made for each MD to complete the scheme. Simulations show that the proposed scheme reduces the total cost by 77.9% and 59.9% on average compared to Only-Local and Only-Server, respectively, in the heterogeneous edge computing environment.
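At a glance, the per-pair search described for the OPP step can be read as follows: each candidate split point k implies that the first k layers run on the MD, the intermediate tensor is uplinked, and the remaining layers run on the ES; the split with the lowest combined latency-energy cost is kept for that MD-ES pair. The Python sketch below illustrates this reading only; the layer-profile fields, the uplink model, and the 0.5/0.5 cost weights are assumptions for illustration, not the paper's exact formulation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LayerProfile:
    # Assumed per-layer profile for one MD-ES pair (not the paper's notation).
    local_latency: float  # s, time to run this layer on the MD
    local_energy: float   # J, energy the MD spends running this layer
    edge_latency: float   # s, time to run this layer on the ES
    out_size: float       # bits of intermediate data this layer outputs

def partition_cost(layers: List[LayerProfile], k: int, input_size: float,
                   uplink_rate: float, tx_power: float,
                   w_latency: float = 0.5, w_energy: float = 0.5) -> float:
    """Weighted latency-energy cost when layers [0, k) run on the MD and
    layers [k, n) run on the ES. k = 0 is full offload, k = n is fully local."""
    local_lat = sum(l.local_latency for l in layers[:k])
    local_eng = sum(l.local_energy for l in layers[:k])
    if k == len(layers):
        tx_bits = 0.0                     # fully local: nothing is uplinked
    elif k == 0:
        tx_bits = input_size              # full offload: raw input goes to the ES
    else:
        tx_bits = layers[k - 1].out_size  # intermediate tensor at the split
    tx_lat = tx_bits / uplink_rate        # transmission delay over the uplink
    tx_eng = tx_power * tx_lat            # transmission energy spent by the MD
    edge_lat = sum(l.edge_latency for l in layers[k:])
    return (w_latency * (local_lat + tx_lat + edge_lat)
            + w_energy * (local_eng + tx_eng))

def optimal_partition_point(layers: List[LayerProfile], input_size: float,
                            uplink_rate: float, tx_power: float) -> int:
    """Evaluate every candidate split for one MD-ES pair and keep the cheapest."""
    return min(range(len(layers) + 1),
               key=lambda k: partition_cost(layers, k, input_size,
                                            uplink_rate, tx_power))
```

In the full scheme, the PPR step would prune the candidate range before this search, and the offloading step would then assign each MD to an ES based on the per-pair minimum costs; both are outside the scope of this sketch.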
Keywords
Mobile edge computing, Deep neural network (DNN), DNN partitioning and offloading, Heterogeneous edge computing