A Transfer Learning Assisted Framework to Expedite and Self-Adapt Bandwidth Allocations in Low-Latency H2M Applications

IEEE Communications Magazine (2023)

Abstract
In view of the aspirations of 6G, networks will soon be expected to support the delivery of tactile-haptic and kinetic perceptions so that humans can interact with real/virtual environments through machines/robots. This requires lowering the end-to-end network latency to sub-milliseconds, thus driving technology advancements at the network edge, encompassing access and enterprise networks. This article focuses on predictive bandwidth allocation schemes that support low-latency human-to-machine (H2M) communications in access networks. In the past, classic schemes have relied on statistical methods to predict bandwidth demands and consequently make bandwidth allocation decisions. More recently, machine learning (ML) techniques have been investigated to improve prediction accuracy. While the use of ML is promising, it incurs learning time, with most techniques unable to learn quickly and adapt to changing traffic conditions, thus affecting latency performance. The ability to make fast, self-adaptive bandwidth allocation decisions that meet the low-latency requirement of H2M traffic is thus critical. To address this challenge, we propose a novel framework, termed the TransfER Learning Assisted framework (TERLA), that incorporates reinforcement learning to support self-adaptive bandwidth decision exploration for H2M traffic, in conjunction with transfer learning to reduce learning time. We present its proof-of-concept, showing the use of simulation-based decision-value experiences as source knowledge to efficiently guide self-adaptive bandwidth decisions for empirical target H2M traffic. Results highlight that TERLA not only reduces H2M latency by self-adapting to optimal bandwidth decisions but also expedites learning time by two orders of magnitude compared to existing schemes.
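The abstract's core idea, using decision-value experiences learned in a source (simulated) domain to seed a reinforcement-learning agent in the target domain, can be illustrated with a minimal sketch. This is not TERLA itself: the toy environment, the tabular Q-learning agent, and all parameter names below are illustrative assumptions, standing in for the paper's actual bandwidth-allocation model. The transfer step is simply initializing the target agent's value table from the source agent's instead of from scratch, which is what shortens the target-side learning time.

```python
import random

def train_q(env_step, episodes, q=None, actions=(0, 1, 2),
            alpha=0.1, gamma=0.9, epsilon=0.1, steps=20):
    """Tabular Q-learning. Passing a pre-trained `q` table transfers
    source-domain decision values into the target agent."""
    if q is None:
        q = {}  # state -> {action: value}
    for _ in range(episodes):
        state = 0
        for _ in range(steps):
            # epsilon-greedy action selection over known values
            if state not in q or random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(q[state], key=q[state].get)
            next_state, reward = env_step(state, action)
            q.setdefault(state, {a: 0.0 for a in actions})
            best_next = max(q[next_state].values()) if next_state in q else 0.0
            # standard Q-learning update
            q[state][action] += alpha * (reward + gamma * best_next
                                         - q[state][action])
            state = next_state
    return q

def toy_step(state, action):
    """Hypothetical stand-in for a bandwidth-allocation environment:
    reward 1 when the allocation (action) matches the demand (state)."""
    reward = 1.0 if action == state % 3 else 0.0
    return (state + 1) % 3, reward

random.seed(0)
# Source domain: learn at length in simulation.
q_source = train_q(toy_step, episodes=200)
# Target domain: start from the transferred table, so far fewer
# episodes are needed than when learning from an empty table.
q_target = train_q(toy_step, episodes=20,
                   q={s: dict(v) for s, v in q_source.items()})
```

In TERLA's terms, `q_source` plays the role of the simulation-based source knowledge and the warm-started `q_target` the self-adaptive decisions for target traffic; the paper reports this kind of warm start reducing learning time by two orders of magnitude in its setting.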