A Multi-Agent Reinforcement Learning Approach for Massive Access in NOMA-URLLC Networks

IEEE Transactions on Vehicular Technology (2023)

Abstract
Ultra-reliable low-latency communication (URLLC) enables diverse applications with stringent latency and reliability requirements. To provide a wide range of services, future beyond-fifth-generation (B5G) systems are expected to support a large number of URLLC users. In this paper, we propose a joint sub-channel allocation and power control method to support massive access in non-orthogonal multiple access-aided URLLC (NOMA-URLLC) networks. We formulate the problem of maximizing the number of successfully accessing users as a multi-agent reinforcement learning problem. A deep Q-network-based multi-agent reinforcement learning (DQN-MARL) algorithm is proposed to solve it while guaranteeing the reliability and latency requirements of URLLC services. Simulation results show that the proposed DQN-MARL algorithm significantly improves the successful access probability in massive access scenarios compared with existing schemes.
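To make the DQN-MARL idea concrete, the sketch below (not the authors' implementation; all state, action, and hyperparameter choices such as NUM_SUBCHANNELS, NUM_POWER_LEVELS, and STATE_DIM are illustrative assumptions) shows how each URLLC user could act as an independent DQN agent whose discrete action jointly encodes a sub-channel choice and a quantized power level, trained with a standard replay buffer and target network.

```python
# Minimal per-user DQN agent sketch for joint sub-channel and power selection.
# Assumptions (not from the paper): observation size, action discretization,
# and reward shaping are placeholders for illustration only.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_SUBCHANNELS = 4      # assumed number of NOMA sub-channels
NUM_POWER_LEVELS = 5     # assumed number of discretized transmit-power levels
STATE_DIM = 8            # assumed local observation size (e.g., CSI, QoS margin)
NUM_ACTIONS = NUM_SUBCHANNELS * NUM_POWER_LEVELS


class QNetwork(nn.Module):
    """Small MLP mapping a user's local observation to Q-values over actions."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DQNAgent:
    """One agent per URLLC user; a flat action index encodes (sub-channel, power)."""
    def __init__(self, gamma: float = 0.95, eps: float = 0.1, lr: float = 1e-3):
        self.q = QNetwork(STATE_DIM, NUM_ACTIONS)
        self.target_q = QNetwork(STATE_DIM, NUM_ACTIONS)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)   # (s, a, r, s') transitions
        self.gamma, self.eps = gamma, eps

    def act(self, state: torch.Tensor) -> int:
        """Epsilon-greedy action selection from the local Q-network."""
        if random.random() < self.eps:
            return random.randrange(NUM_ACTIONS)
        with torch.no_grad():
            return int(self.q(state).argmax().item())

    @staticmethod
    def decode(action: int) -> tuple[int, int]:
        """Map a flat action index back to (sub-channel index, power level)."""
        return divmod(action, NUM_POWER_LEVELS)

    def update(self, batch_size: int = 32) -> None:
        """One gradient step on the standard DQN temporal-difference target."""
        if len(self.replay) < batch_size:
            return
        s, a, r, s2 = zip(*random.sample(self.replay, batch_size))
        s, s2 = torch.stack(s), torch.stack(s2)
        a = torch.tensor(a).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        q_sa = self.q(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.target_q(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In such a setup, each time slot every agent observes its local state, selects an action, and receives a reward (for instance, 1 if its latency and reliability targets are met and 0 otherwise, an assumed reward here); transitions are stored in the replay buffer and the target network is refreshed periodically.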
Keywords
Massive access, multi-agent reinforcement learning, NOMA, URLLC