No-regret learning for repeated non-cooperative games with lossy bandits

AUTOMATICA (2024)

Abstract
This paper considers no-regret learning for repeated continuous-kernel games with lossy bandit feedback. Since it is difficult to give an explicit model of the utility functions in dynamic environments, the players' actions can only be learned through bandit feedback. Moreover, due to unreliable communication channels or privacy protection, the bandit feedback may be lost or dropped at random. We therefore study an asynchronous online learning strategy with which the players adaptively adjust their next actions to minimize the long-term regret. The paper provides a novel no-regret learning algorithm, called Online Gradient Descent with lossy bandits (OGD-lb). We first give a regret analysis for concave games with differentiable and Lipschitz utilities. We then show that the action profile converges to a Nash equilibrium with probability 1 when the game is also strictly monotone. We further provide the mean-squared convergence rate O(√N p_i^{-2} k^{-1/3}) when the game is β-strongly monotone, where N denotes the number of players, p_i is the update probability, and k is the iteration index. In addition, we extend the algorithm to the case where the loss probability of the bandit feedback is unknown, and prove its almost sure convergence to the Nash equilibrium for strictly monotone games. Finally, we take resource management in fog computing as an application example and carry out numerical experiments to demonstrate the algorithm's performance empirically. (c) 2023 Elsevier Ltd. All rights reserved.
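The abstract describes the OGD-lb update only at a high level. As a rough illustration, the sketch below implements a generic online-gradient-descent loop with lossy one-point bandit feedback: each round, every player perturbs its action, observes a single cost sample with its update probability, and skips the update when the sample is lost. The sphere-sampling estimator, the projection step, and all names and parameters (ogd_lb, cost_fns, delta, and so on) are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_sphere(d):
    """Uniform random direction on the unit sphere in R^d."""
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def project_ball(v, r):
    """Euclidean projection onto the ball of radius r."""
    n = np.linalg.norm(v)
    return v if n <= r else (r / n) * v

def ogd_lb(cost_fns, dims, radius, p, delta, steps, step_size):
    """Hypothetical sketch: online gradient descent with lossy bandit
    feedback. Player i's cost sample arrives with probability p[i];
    if it is lost, the player keeps its current action this round."""
    N = len(dims)
    x = [np.ones(d) for d in dims]            # arbitrary feasible start
    for _ in range(steps):
        u = [unit_sphere(d) for d in dims]    # per-player perturbations
        played = [x[i] + delta * u[i] for i in range(N)]
        for i in range(N):
            if rng.random() < p[i]:           # feedback not lost
                # one-point gradient estimate from the single cost sample
                g = (dims[i] / delta) * cost_fns[i](played) * u[i]
                x[i] = project_ball(x[i] - step_size * g, radius)
    return x

# Toy two-player strongly monotone game with Nash equilibrium at the
# origin: cost_i(x) = ||x_i - 0.5 * x_{-i}||^2.
costs = [
    lambda x: float(np.sum((x[0] - 0.5 * x[1]) ** 2)),
    lambda x: float(np.sum((x[1] - 0.5 * x[0]) ** 2)),
]
profile = ogd_lb(costs, dims=[2, 2], radius=5.0, p=[0.8, 0.6],
                 delta=0.1, steps=20000, step_size=0.01)
print(profile)  # both actions should drift near the equilibrium (0, 0)
```

A player that loses its sample simply repeats its previous action, which mirrors the asynchronous-update behavior the abstract describes; with a constant perturbation radius the iterates reach only a neighborhood of the equilibrium, which is why the paper's analysis uses carefully decaying parameters.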
Keywords
Online learning, No-regret learning, Repeated games, Lossy bandits