Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach

arXiv (2023)

Abstract
In this paper, we propose an online convex optimization approach with two different levels of adaptivity. On the higher level, our approach is agnostic to the unknown types and curvatures of the online functions; on the lower level, it can exploit the unknown niceness of the environments and attain problem-dependent guarantees. Specifically, we obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions, respectively, where $d$ is the dimension, $V_T$ denotes the problem-dependent gradient variation, and the $\hat{\mathcal{O}}(\cdot)$-notation omits $\log V_T$ factors. Our result not only safeguards the worst-case guarantees but also directly implies small-loss bounds within the same analysis. Moreover, when applied to adversarial/stochastic convex optimization and game theory problems, our result enhances the existing universal guarantees. Our approach is based on a multi-layer online ensemble framework incorporating novel ingredients, including a carefully designed optimism for unifying diverse function types and cascaded corrections for algorithmic stability. Notably, despite its multi-layer structure, our algorithm requires only one gradient query per round, making it favorable when gradient evaluation is time-consuming. This is facilitated by a novel regret decomposition equipped with carefully designed surrogate losses.
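
For reference, $V_T$ is not defined on this page; in the gradient-variation online learning literature it standardly denotes the cumulative variation of the loss gradients (stated here as background, not quoted from the abstract):

    V_T = \sum_{t=2}^{T} \sup_{x \in \mathcal{X}} \left\| \nabla f_t(x) - \nabla f_{t-1}(x) \right\|_2^2,

where $f_1, \dots, f_T$ are the online loss functions and $\mathcal{X}$ is the feasible domain. In benign, slowly changing environments $V_T$ can be $\mathcal{O}(1)$, so the bounds above can be far smaller than the minimax $\mathcal{O}(\sqrt{T})$, $\mathcal{O}(d \log T)$ and $\mathcal{O}(\log T)$ rates for convex, exp-concave and strongly convex losses.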
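
To make the one-gradient-per-round ensemble idea concrete, below is a minimal, hypothetical Python sketch (not the paper's actual algorithm): a two-layer ensemble with online gradient descent base learners over a step-size grid and an exponential-weights meta-learner, where the single gradient query $\nabla f_t(x_t)$ is shared by all layers through linearized surrogate losses $\langle \nabla f_t(x_t), x_t^i - x_t \rangle$. All names and parameters here are illustrative assumptions.

    import numpy as np

    def project_l2_ball(x, radius=1.0):
        # Euclidean projection onto the L2 ball of the given radius.
        norm = np.linalg.norm(x)
        return x if norm <= radius else x * (radius / norm)

    class TwoLayerEnsemble:
        # Hypothetical sketch: exponential-weights meta-learner over OGD base
        # learners with a grid of step sizes; one gradient query per round is
        # shared by all layers via linearized surrogate losses.
        def __init__(self, dim, step_sizes, meta_lr=0.5, radius=1.0):
            self.radius = radius
            self.etas = list(step_sizes)               # step-size grid for base learners
            self.xs = np.zeros((len(self.etas), dim))  # base learners' iterates
            self.logw = np.zeros(len(self.etas))       # meta-learner log-weights
            self.meta_lr = meta_lr

        def predict(self):
            w = np.exp(self.logw - self.logw.max())
            self.p = w / w.sum()       # meta distribution over base learners
            self.x = self.p @ self.xs  # final decision: convex combination
            return self.x

        def update(self, grad):
            # Linear surrogate of base learner i: <grad, x_i - x_t>. By convexity,
            # regret on these linear losses upper-bounds the true regret.
            surrogate = self.xs @ grad - self.x @ grad
            self.logw -= self.meta_lr * surrogate  # Hedge update on surrogates
            for i, eta in enumerate(self.etas):
                # Every base learner reuses the same single gradient query.
                self.xs[i] = project_l2_ball(self.xs[i] - eta * grad, self.radius)

    # Example: quadratic losses f_t(x) = 0.5 * ||x - c_t||^2 with a slowly
    # drifting target c_t, so the gradient variation V_T stays small.
    learner = TwoLayerEnsemble(dim=2, step_sizes=[0.01, 0.1, 1.0])
    for t in range(1, 1001):
        x = learner.predict()
        c = np.array([np.sin(t / 200.0), np.cos(t / 200.0)])
        grad = x - c  # the round's only gradient query: grad f_t(x_t) = x_t - c_t
        learner.update(grad)

A fixed meta learning rate and a plain Hedge step are used here for brevity; the paper's multi-layer ensemble instead employs optimism (an optimistic prediction of the next gradient), cascaded correction terms, and an additional layer to obtain the gradient-variation bounds stated above.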