Parallel Placement of Virtualized Network Functions via Federated Deep Reinforcement Learning

Haojun Huang, Jialin Tian, Geyong Min, Hao Yin, Cheng Zeng, Yangming Zhao, Dapeng Oliver Wu

IEEE/ACM Transactions on Networking (2024)

Abstract
Network Function Virtualization (NFV) introduces a new network architecture that offers different network services flexibly and dynamically in the form of Service Function Chains (SFCs), which refer to sets of Virtualized Network Functions (VNFs) chained in a specific order. However, the service latency often increases linearly with the length of an SFC due to the sequential execution of VNFs, resulting in sub-optimal performance for most delay-sensitive applications. In this paper, a novel Parallel VNF Placement (PVFP) approach is proposed for real-world networks via Federated Deep Reinforcement Learning (FDRL). PVFP has three characteristics that distinguish it from previous work: 1) PVFP designs a specific parallel principle, with three parallelism identification rules, to reasonably decide partial VNF parallelism; 2) PVFP partitions SFCs across multiple domains based on their remaining resources and potential parallel VNFs, ensuring that VNFs are reasonably distributed for resource balancing among domains; 3) an FDRL-based parallel VNF placement framework is designed to train a global intelligent model, with time-variant local autonomous exploration, for cross-domain SFC deployment while avoiding data sharing among domains. Simulation results in different scenarios demonstrate that PVFP can significantly reduce the end-to-end latency of SFCs at moderate resource expenditure for placing VNFs across multiple administrative domains, compared with state-of-the-art mechanisms.
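The abstract describes training a global model from per-domain DRL agents without sharing domain data. The following is a minimal illustrative sketch, not the paper's algorithm, of how such cross-domain federated aggregation is commonly structured (FedAvg-style parameter averaging); the `local_update` interface, weighting scheme, and all names are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only: FedAvg-style aggregation of per-domain policy
# parameters, as one plausible reading of "training a global intelligent
# model without data sharing among domains". Not taken from the paper.
import numpy as np

def aggregate_global_model(local_params, weights=None):
    """Average per-domain parameter vectors into a global model.

    local_params: list of np.ndarray, one parameter vector per domain.
    weights: optional per-domain weights (e.g., proportional to the number
             of placement samples explored locally); uniform if None.
    """
    if weights is None:
        weights = [1.0] * len(local_params)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * p for w, p in zip(weights, local_params))

def federated_round(domains, global_params, local_steps=10):
    """One federated round: broadcast, local exploration, aggregate.

    `domains` is a list of objects exposing a hypothetical
    local_update(params, steps) method that runs DRL exploration on
    domain-private data and returns (updated_params, sample_count).
    """
    updated, counts = [], []
    for d in domains:
        params, n_samples = d.local_update(global_params.copy(), local_steps)
        updated.append(params)
        counts.append(n_samples)
    return aggregate_global_model(updated, weights=counts)
```

In this sketch, only model parameters cross domain boundaries; the placement experience collected in each domain stays local, which is the property the abstract attributes to the FDRL framework.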
Keywords
Network function virtualization, parallel placement, federated learning, deep reinforcement learning, multiple domains