eMBB and URLLC Service Multiplexing Based on Deep Reinforcement Learning in 5G and Beyond

2022 IEEE Wireless Communications and Networking Conference (WCNC), 2022

Abstract
In 5G, eMBB services are defined to support high data rates, while URLLC services demand low latency and high reliability. Multiplexing these two services on the same radio resources leads to a challenging resource allocation problem due to their heterogeneous requirements. In this paper, we formulate this problem as two non-convex nonlinear programming subproblems, aiming to maximize the average data rate of all eMBB services while satisfying the delay constraint of each URLLC service. We propose an event-driven deep reinforcement learning (DRL) based resource allocation mechanism (EDRL-RAM), comprising two schedulers, an eMBB scheduler and a URLLC scheduler, to optimize long-term eMBB and URLLC performance. The eMBB scheduler intelligently allocates resources for each incoming eMBB event, while the URLLC scheduler intelligently places each incoming URLLC event within the ongoing transmissions of eMBB services. To cope with stochastic event arrivals and varying network conditions, EDRL-RAM can employ any of four DRL techniques in both schedulers: Policy Gradient (PG), Deep Q-learning Network (DQN), Advantage Actor-Critic (A2C), and Deep Deterministic Policy Gradient (DDPG). Simulation results show that, within the proposed EDRL-RAM, DDPG achieves the best data-rate and delay performance, followed by A2C, DQN, and PG. With any of the four DRL techniques, EDRL-RAM outperforms SAFE-TS, the best available related scheme, in both data rate and delay.
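To make the event-driven scheduling idea concrete, the sketch below shows a minimal stand-in: a tabular epsilon-greedy Q-learner (deliberately simpler than the paper's PG/DQN/A2C/DDPG agents) picks a resource block for each arriving eMBB or URLLC event. All names, the state encoding, and the reward shaping are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of an event-driven scheduler loop in the spirit of
# EDRL-RAM. The state, actions, and rewards are illustrative
# assumptions; a tabular epsilon-greedy Q-learner stands in for the
# paper's DRL agents (PG/DQN/A2C/DDPG).
import random
from collections import defaultdict

N_RB = 8                 # number of resource blocks (assumed)
ACTIONS = list(range(N_RB))  # action: which resource block serves the event

q_table = defaultdict(float)     # (state, action) -> estimated value
epsilon, alpha, gamma = 0.1, 0.5, 0.9

def choose_action(state):
    """Epsilon-greedy action selection over resource blocks."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def reward(event, rb, load):
    """Illustrative reward: eMBB events are rewarded for high rate
    (lightly loaded blocks); URLLC events for meeting the delay budget
    (being served on a block that is not saturated)."""
    if event == "eMBB":
        return 1.0 / (1 + load[rb])
    return 1.0 if load[rb] < N_RB else -1.0

load = [0] * N_RB
state = tuple(load)
for step in range(1000):
    # Stochastic event arrivals: mostly eMBB, with URLLC interleaved.
    event = random.choices(["eMBB", "URLLC"], weights=[0.7, 0.3])[0]
    a = choose_action(state)
    r = reward(event, a, load)
    load[a] = min(load[a] + 1, N_RB)          # occupy the chosen block
    load = [max(l - 1, 0) for l in load]      # blocks drain each step
    next_state = tuple(load)
    # One-step Q-learning update (stand-in for the DRL training step).
    best_next = max(q_table[(next_state, b)] for b in ACTIONS)
    q_table[(state, a)] += alpha * (r + gamma * best_next - q_table[(state, a)])
    state = next_state
```

In the paper's setting the two event types would be handled by separate learned schedulers, with the URLLC agent deciding where to puncture ongoing eMBB transmissions; this single-loop sketch only conveys the event-driven state/action/reward structure.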
Keywords
5G, eMBB, URLLC, resource allocation, deep reinforcement learning