Almost Optimal Algorithms for Linear Stochastic Bandits with Heavy-Tailed Payoffs

Advances in Neural Information Processing Systems 31 (NIPS 2018)

Abstract
In linear stochastic bandits, it is commonly assumed that payoffs have sub-Gaussian noise. In this paper, under a weaker assumption on the noise, we study the problem of linear stochastic bandits with heavy-tailed payoffs (LinBET), where the payoff distributions have finite moments of order $1+\epsilon$ for some $\epsilon \in (0, 1]$. We rigorously analyze the regret lower bound of LinBET as $\Omega(T^{1/(1+\epsilon)})$, implying that finite moments of order 2 (i.e., finite variances) yield a lower bound of $\Omega(\sqrt{T})$, with $T$ being the total number of rounds to play bandits. The lower bound also indicates that the state-of-the-art algorithms for LinBET are far from optimal. By adopting median of means with a well-designed allocation of decisions, and truncation based on historical information, we develop two novel bandit algorithms whose regret upper bounds match the lower bound up to polylogarithmic factors. To the best of our knowledge, we are the first to solve LinBET optimally in the sense of the polynomial order on $T$. The proposed algorithms are evaluated on synthetic datasets and outperform the state-of-the-art results.
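The two robust-mean tools the abstract builds on, median of means and truncation, can be illustrated in isolation. The sketch below shows both estimators for scalar heavy-tailed rewards; the function names, the group count `k`, the truncation `threshold`, and the Student-t test distribution are illustrative assumptions, not the paper's actual algorithms, which embed these estimators into the linear bandit together with a tailored allocation of decisions.

```python
import numpy as np

def median_of_means(samples, k):
    """Median-of-means estimate of a heavy-tailed mean.

    Splits the samples into k groups, averages each group, and returns
    the median of the group means. The median step keeps the estimate
    from being dragged by the occasional extreme draw that would ruin
    a plain sample mean.
    """
    samples = np.asarray(samples, dtype=float)
    groups = np.array_split(samples, k)
    return float(np.median([g.mean() for g in groups]))

def truncated_mean(samples, threshold):
    """Truncated-mean estimate of a heavy-tailed mean.

    Zeroes out samples whose magnitude exceeds the threshold, then
    averages. With a threshold tuned to the (1+eps)-th moment bound,
    the bias from truncation is outweighed by the variance reduction.
    """
    samples = np.asarray(samples, dtype=float)
    clipped = np.where(np.abs(samples) <= threshold, samples, 0.0)
    return float(clipped.mean())

# Hypothetical usage: Student-t rewards with 2.5 degrees of freedom
# stand in for payoffs that only have finite low-order moments.
rng = np.random.default_rng(0)
rewards = rng.standard_t(df=2.5, size=2000)  # true mean is 0
print(median_of_means(rewards, k=20))   # robust estimate near 0
print(truncated_mean(rewards, 10.0))    # robust estimate near 0
print(rewards.mean())                   # can be far off on bad seeds
```

Both estimators trade a small bias for much tighter concentration under heavy tails, which is what lets the paper's bandit algorithms close the gap to the $\Omega(T^{1/(1+\epsilon)})$ lower bound up to polylogarithmic factors.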
Keywords
parallel optimization, upper bounds, historical information