Fast Change Identification in Multi-Play Bandits and its Applications in Wireless Networks

IEEE TRANSACTIONS ON COMMUNICATIONS (2023)

Abstract
Next-generation wireless services are characterized by diverse requirements. To sustain such applications, wireless access points need to probe the users in the network periodically in an energy-efficient manner. We study a novel multi-armed bandit (MAB) setting that mandates probing all the arms periodically while keeping track of the current best arm in a piecewise-stationary environment. We develop TS-GE, which balances the regret guarantees of classical Thompson sampling (TS) with broadcast probing (BP) of all the arms simultaneously to actively detect a change in the reward distributions. The main innovation lies in identifying the changed arm via an optional indexing subroutine, group exploration (GE), whose cost scales as $\log_2(K)$ for a $K$-armed bandit. We characterize the probabilities of missed detection and false alarm in terms of the environmental parameters. We highlight the conditions under which the regret guarantee of TS-GE outperforms those of state-of-the-art passively adaptive and actively adaptive algorithms, in particular ADSWITCH. We demonstrate the efficacy of TS-GE by employing it in two wireless system applications: task offloading in mobile-edge computing (MEC) and an industrial Internet-of-Things (I-IoT) network designed for simultaneous wireless information and power transfer (SWIPT).
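The $\log_2(K)$ scaling of the GE subroutine follows from binary indexing of the arms: each group probe resolves one bit of the changed arm's index. The Python sketch below illustrates only this indexing idea, assuming a hypothetical probe_group oracle that reports whether the changed arm lies in a given subset (e.g., via a broadcast probe of that group and a deviation test on its aggregate reward); it is not the paper's exact TS-GE procedure.

```python
import math

def group_explore(K, probe_group):
    """Identify a single changed arm among K arms with ~log2(K) group probes.

    probe_group(group) is a hypothetical oracle returning True if the changed
    arm belongs to `group`. This sketch only demonstrates the bit-indexing
    argument behind the log2(K) scaling.
    """
    changed = 0
    for bit in range(math.ceil(math.log2(K))):
        # Probe the group of arms whose index has this bit set.
        group = [arm for arm in range(K) if (arm >> bit) & 1]
        if probe_group(group):
            changed |= 1 << bit
    return changed

# Example: suppose arm 5 out of K = 8 arms has changed.
K, true_changed = 8, 5
oracle = lambda group: true_changed in group
assert group_explore(K, oracle) == true_changed  # identified in 3 probes
```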
Keywords
Program processors, Wireless communication, Heuristic algorithms, Optimization, Task analysis, Servers, Proposals, Multi-armed bandits, Thompson sampling, Non-stationarity, Online learning