Distributed Robust Bandits With Efficient Communication

IEEE Transactions on Network Science and Engineering (2023)

Abstract
The Distributed Multi-Armed Bandit (DMAB) is a powerful framework for studying many network problems. The DMAB is typically studied in a paradigm where signals activate each agent with a fixed probability, and the rewards revealed to agents are assumed to be either generated from fixed and unknown distributions, i.e., stochastic rewards, or arbitrarily manipulated by an adversary, i.e., adversarial rewards. However, this paradigm fails to capture the dynamics and uncertainties of many real-world applications, where the signal that activates an agent may not follow any distribution, and the rewards might be partially stochastic and partially adversarial. Motivated by this, we study the asynchronous stochastic DMAB problem with adversarial corruptions, where agents are activated arbitrarily and rewards initially sampled from distributions might be corrupted by an adversary. The objectives are to simultaneously minimize the regret and the communication cost while remaining robust to corruption. To address all these issues, we propose a Robust and Distributed Active Arm Elimination algorithm, namely RDAAE, which only needs to transmit one real number (e.g., an arm index or a reward) per communication. We theoretically prove that the regret and communication cost degrade smoothly as the corruption level increases.
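To make the corruption-robust elimination idea concrete, below is a minimal single-agent sketch, not the paper's RDAAE algorithm: the function name, the corruption-budget parameter, and the widened elimination threshold are illustrative assumptions only. In the distributed setting described in the abstract, each message would carry a single scalar (e.g., an arm index or a reward); that part is omitted here.

```python
import math
import random

def robust_arm_elimination(reward_fn, n_arms, horizon, corruption_budget):
    """Toy corruption-robust active arm elimination (hypothetical sketch).

    Plays the surviving arms in round-robin fashion and eliminates an arm
    once its empirical mean falls below the best empirical mean by more than
    a confidence radius that is widened by the assumed corruption budget,
    so a bounded amount of adversarial corruption cannot force a wrong
    elimination.
    """
    active = list(range(n_arms))
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    t = 0
    while t < horizon and len(active) > 1:
        for arm in list(active):
            if t >= horizon:
                break
            r = reward_fn(arm, t)          # possibly corrupted reward in [0, 1]
            counts[arm] += 1
            sums[arm] += r
            t += 1
        means = {a: sums[a] / counts[a] for a in active}
        best = max(means.values())

        def radius(a):
            # Standard confidence radius plus a corruption term.
            return math.sqrt(2 * math.log(max(t, 2)) / counts[a]) + corruption_budget / counts[a]

        active = [a for a in active if means[a] + 2 * radius(a) >= best]
    return active

if __name__ == "__main__":
    true_means = [0.9, 0.6, 0.5]

    def reward_fn(arm, t):
        r = 1.0 if random.random() < true_means[arm] else 0.0
        # Toy adversary: corrupts early pulls of the best arm.
        if arm == 0 and t < 30:
            r = 0.0
        return r

    print(robust_arm_elimination(reward_fn, n_arms=3, horizon=5000, corruption_budget=30))
```

As in the abstract's guarantee, the extra `corruption_budget / counts[a]` term makes the elimination rule degrade gracefully: a larger assumed corruption level only widens the threshold (slowing elimination) rather than breaking correctness.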
Keywords
Distributed multi-agent bandit (DMAB),Adversarial corruptions,Cooperation,Robust learning