A Hybrid Deep Reinforcement Learning Approach for Jointly Optimizing Offloading and Resource Management in Vehicular Networks

Chang-Lin Chen, Bharat K. Bhargava, Vaneet Aggarwal, Basavaraj Tonshal, Amrit Gopal

IEEE Trans. Veh. Technol. (2024)

Abstract
Satisfying the quality-of-service requirements of data-intensive autonomous driving applications has become challenging. In this work, we propose a novel methodology that optimizes communication, computation, and caching configurations in a vehicular multi-access edge computing (MEC) system to minimize the average latency of the tasks from the vehicles and maximize the number of tasks finished within their latency requirements. The communication model characterizes bandwidth and power allocation for uplink and downlink transmission in the vehicular MEC system. Our caching model includes per-edge-server variables that determine the trade-off between flexibility and hit rate. Finally, the computation model characterizes computation resource allocation. Our method for solving the optimization problem consists of two main steps. First, a deep Q-learning algorithm determines the assignment of tasks to edge servers. Then, a greedy approach is applied to the communication, computation, and caching subproblems to decide the bandwidth and power, CPU, and caching strategy, respectively. Simulation results show that our algorithm outperforms several baselines in minimizing latency and maximizing the number of tasks finished within latency requirements, and verify the benefit of including the different resource allocation variables in our optimization.
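The two-step structure described in the abstract can be sketched as follows. This is a minimal, purely illustrative toy in Python, not the paper's implementation: a tabular Q-learning loop stands in for the deep Q-network that assigns tasks to edge servers, followed by a simple greedy (proportional) bandwidth split per server. All names, sizes, and the latency model are assumptions for demonstration only.

```python
import random

random.seed(0)  # reproducible toy run

# Hypothetical toy setting (illustrative, not from the paper).
NUM_SERVERS = 3
NUM_TASKS = 5
BANDWIDTH = 10.0  # per-server bandwidth budget

# Tabular Q-learning stand-in for the paper's deep Q-network:
# state = task index, action = edge-server assignment.
Q = [[0.0] * NUM_SERVERS for _ in range(NUM_TASKS)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def latency(task, server):
    # Placeholder deterministic latency model.
    return (task * 7 + server * 3) % 5 + 1

def choose_server(task):
    # Epsilon-greedy action selection over the Q-values.
    if random.random() < epsilon:
        return random.randrange(NUM_SERVERS)
    return max(range(NUM_SERVERS), key=lambda s: Q[task][s])

# Step 1: learn a task-to-server assignment that minimizes latency
# (reward is negative latency, so higher Q means lower latency).
for episode in range(200):
    for task in range(NUM_TASKS):
        s = choose_server(task)
        reward = -latency(task, s)
        next_best = max(Q[(task + 1) % NUM_TASKS])
        Q[task][s] += alpha * (reward + gamma * next_best - Q[task][s])

assignment = [max(range(NUM_SERVERS), key=lambda s: Q[t][s])
              for t in range(NUM_TASKS)]

# Step 2: greedy resource allocation -- here, an equal (proportional)
# share of each server's bandwidth among the tasks assigned to it.
tasks_per_server = {}
for t, s in enumerate(assignment):
    tasks_per_server.setdefault(s, []).append(t)
share = {s: BANDWIDTH / len(ts) for s, ts in tasks_per_server.items()}
```

In the paper, step 1 is a deep Q-network rather than a table, and step 2 solves separate communication, computation, and caching subproblems; this sketch only conveys the decomposition.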
Keywords
Multiple-access edge computing,caching,software-defined networking,deep reinforcement learning