Defining Problem from Solutions: Inverse Reinforcement Learning (IRL) and Its Applications for Next-Generation Networking
arXiv (2024)
Abstract
Performance optimization is a critical concern in networking, on which Deep Reinforcement Learning (DRL) has achieved great success. Nonetheless, DRL training relies on precisely defined reward functions, which formulate the optimization objective and indicate positive/negative progress toward the optimum. With the ever-increasing environmental complexity and human participation in Next-Generation Networking (NGN), defining appropriate reward functions becomes challenging. In this article, we explore the applications of Inverse Reinforcement Learning (IRL) in NGN. In particular, whereas DRL aims to find optimal solutions to a given problem, IRL defines the problem from optimal solutions: the solutions are collected from experts, and the problem is defined by inferring the underlying reward function. Specifically, we first formally introduce the IRL technique, including its fundamentals, workflow, and differences from DRL. Afterward, we present the motivations for applying IRL in NGN and survey existing studies. Furthermore, to demonstrate the process of applying IRL in NGN, we perform a case study on human-centric prompt engineering in Generative AI-enabled networks. We demonstrate the effectiveness of both the DRL and IRL techniques and show the superiority of IRL.