Tight Concentrations and Confidence Sequences From the Regret of Universal Portfolio

IEEE TRANSACTIONS ON INFORMATION THEORY (2024)

Abstract
A classic problem in statistics is the estimation of the expectation of random variables from samples. This gives rise to the tightly connected problems of deriving concentration inequalities and confidence sequences, i.e., confidence intervals that hold uniformly over time. Previous studies have shown that the regret guarantee of an online learning algorithm can be converted into concentration inequalities, but the resulting concentration results were not tight. In this paper, we show that the regret guarantees of universal portfolio algorithms, applied to the online learning problem of betting, give rise to new implicit time-uniform concentration inequalities for bounded random variables. The key feature of our concentration results is that they are centered around the maximum log wealth of the best fixed betting strategy in hindsight. We propose numerical methods to solve these implicit inequalities, yielding confidence sequences that enjoy the empirical Bernstein rate with the optimal asymptotic behavior while never being worse than Bernoulli-KL confidence bounds. We further show that our confidence sequences are never vacuous, even with a single sample, for any target failure rate δ ∈ (0, 1). Our empirical study shows that our confidence bounds achieve state-of-the-art performance, especially in the small-sample regime.
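The construction sketched in the abstract can be illustrated with a toy implementation: for each candidate mean m, compute the log wealth of the best fixed bet in hindsight on [0,1]-valued samples, and keep m in the confidence set whenever that wealth stays below log(1/δ) plus a regret term. The 0.5·log(t) + 1 regret bound and the grid resolutions below are illustrative placeholders, not the paper's exact constants or numerical method.

```python
import numpy as np

def betting_confidence_set(xs, delta=0.05, n_grid=200):
    """Toy confidence set for the mean of samples in [0, 1].

    A candidate mean m survives if the maximum log wealth of a fixed
    betting strategy (in hindsight) stays below log(1/delta) plus a
    placeholder universal-portfolio-style regret term.
    """
    xs = np.asarray(xs, dtype=float)
    t = len(xs)
    # Placeholder regret bound; the paper derives the exact one.
    threshold = np.log(1.0 / delta) + 0.5 * np.log(t) + 1.0
    kept = []
    for m in np.linspace(1e-3, 1.0 - 1e-3, n_grid):
        # Fixed bets lam in (-1/(1-m), 1/m) keep wealth positive
        # for any outcome x in [0, 1].
        lams = np.linspace(-1.0 / (1.0 - m) + 1e-6,
                           1.0 / m - 1e-6, n_grid)
        # log wealth of each fixed bet: sum_i log(1 + lam * (x_i - m))
        log_wealth = np.log1p(np.outer(lams, xs - m)).sum(axis=1)
        if log_wealth.max() <= threshold:
            kept.append(m)
    return (min(kept), max(kept)) if kept else (None, None)
```

Candidate means far from the empirical mean let some fixed bet multiply its wealth quickly, so their maximum log wealth crosses the threshold and they are excluded; means near the truth keep every bet's wealth small and remain in the set.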
Keywords
Portfolios, Random variables, Testing, Tail, Behavioral sciences, Upper bound, Prediction algorithms, Confidence sequence, Regret, Universal portfolio