DRL-Based Energy-Efficient Baseband Function Deployments for Service-Oriented Open RAN

IEEE Trans. Green Commun. Netw. (2024)

Abstract
Open Radio Access Network (Open RAN) has gained tremendous attention from industry and academia by decentralizing baseband functions across multiple processing units located at different places. However, the ever-expanding scope of RANs, along with fluctuations in resource utilization across locations and timeframes, necessitates robust function management policies to minimize network energy consumption. Most recently developed strategies neglect the activation time and the energy required by the server activation process, even though this process can offset the potential energy savings gained from server hibernation. Furthermore, user plane functions, which can be deployed on edge computing servers to provide low-latency services, have not been sufficiently considered. In this paper, a multi-agent deep reinforcement learning (DRL) based function deployment algorithm, coupled with a heuristic method, is developed to minimize energy consumption while fulfilling multiple requests and adhering to latency and resource constraints. In an 8-MEC network, the DRL-based solution approaches the performance of the benchmark while offering up to 51% energy savings compared to existing approaches. In a larger 14-MEC network, it maintains a 38% energy-saving advantage and ensures real-time response capability. Furthermore, this paper prototypes an Open RAN testbed to verify the feasibility of the proposed solution.
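The MADDPG keyword implies a centralized-training, decentralized-execution structure: each MEC site runs its own actor over local observations, while a centralized critic scores the joint deployment decision. The sketch below illustrates only that structure; the dimensions, module names, and dummy data are assumptions for illustration, not the paper's code or reward design (which combines energy with latency and resource constraints).

```python
# Minimal MADDPG-style sketch (hypothetical shapes and names, not the paper's implementation).
import torch
import torch.nn as nn

N_AGENTS = 8   # one agent per MEC site (assumed, mirroring the 8-MEC scenario)
OBS_DIM = 16   # per-agent observation: local load, queued requests, latency budgets (assumed)
ACT_DIM = 4    # per-agent action: server activation / function placement scores (assumed)

class Actor(nn.Module):
    """Decentralized actor: maps a local MEC observation to a deployment action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: scores the joint observation-action pair during training."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_AGENTS * (OBS_DIM + ACT_DIM), 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# One forward pass on dummy data: each agent acts on its own observation,
# and the critic evaluates the joint decision (e.g., negative energy plus penalties).
obs = torch.randn(N_AGENTS, OBS_DIM)
acts = torch.stack([actors[i](obs[i]) for i in range(N_AGENTS)])
q_value = critic(obs.flatten(), acts.flatten())
print(q_value.item())
```

At execution time only the per-site actors are needed, which is what allows decentralized, low-latency deployment decisions at each MEC.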
Keywords
Open RAN, resource optimization, baseband function deployment, energy-efficient, MADDPG