Distributed Channel Allocation for Mobile 6G Subnetworks via Multi-Agent Deep Q-Learning

WCNC 2023

Abstract
Sixth generation (6G) in-X subnetworks have recently been proposed as short-range, low-power radio cells for supporting localized extreme wireless connectivity inside entities such as industrial robots, vehicles, and the human body. The deployment of in-X subnetworks in these entities may lead to fast changes in the interference level and hence varying risks of communication failure. In this paper, we investigate fully distributed resource allocation for interference mitigation in dense deployments of 6G in-X subnetworks. Resource allocation is cast as a multi-agent reinforcement learning problem, and agents are trained in a simulated environment to perform channel selection with the goal of maximizing the per-subnetwork rate subject to a target rate constraint for each device. To overcome the slow convergence and performance degradation associated with fully distributed learning, we adopt a centralized training procedure in which a deep Q-network (DQN) is trained at a central location using measurements obtained from all subnetworks. The policy is implemented as a Double Deep Q-Network (DDQN), which improves training stability and convergence. Performance evaluation results in an in-factory environment indicate that the proposed method achieves up to a 19% rate increase relative to random allocation and is only marginally worse than complex centralized benchmarks.
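To illustrate the core mechanism the abstract describes, the sketch below shows Double-Q-style channel selection for a single agent: action selection uses the online Q-values while the bootstrap value comes from a periodically synced target copy, which is the overestimation fix that motivates DDQN. This is a hypothetical tabular stand-in, not the paper's deep-network implementation; the channel count, reward model, and all hyperparameters are illustrative assumptions.

```python
import random

K = 4                     # number of shared channels (assumed)
q_online = [0.0] * K      # online Q-values (stands in for the online DQN)
q_target = [0.0] * K      # periodically synced target Q-values
alpha, gamma, eps = 0.1, 0.9, 0.1

def interference_reward(channel, busy_channels):
    """Toy reward model: 1.0 on a clear channel, 0.0 on an interfered one."""
    return 0.0 if channel in busy_channels else 1.0

random.seed(0)
for step in range(2000):
    # epsilon-greedy channel selection from the online values
    if random.random() < eps:
        a = random.randrange(K)
    else:
        a = max(range(K), key=lambda c: q_online[c])
    r = interference_reward(a, busy_channels={0, 1})  # channels 0,1 occupied
    # Double-Q target: argmax taken from online values, evaluated with target values
    a_star = max(range(K), key=lambda c: q_online[c])
    td_target = r + gamma * q_target[a_star]
    q_online[a] += alpha * (td_target - q_online[a])
    if step % 50 == 0:
        q_target = list(q_online)  # periodic target sync

best = max(range(K), key=lambda c: q_online[c])
print(best)  # expected: one of the clear channels (2 or 3)
```

In the paper's setting each subnetwork runs its own copy of a policy like this, trained centrally on measurements from all subnetworks and then executed fully distributedly.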
Keywords
Machine learning, reinforcement learning, interference management, beyond 5G networks, resource allocation