Fast Rates in Online Convex Optimization by Exploiting the Curvature of Feasible Sets

CoRR (2024)

Abstract
In this paper, we explore online convex optimization (OCO) and introduce a new analysis that provides fast rates by exploiting the curvature of feasible sets. In online linear optimization, it is known that if the average gradient of the loss functions is larger than a certain value, the follow-the-leader (FTL) algorithm can exploit the curvature of the feasible set to achieve logarithmic regret. This paper reveals that algorithms adaptive to the curvature of loss functions can also leverage the curvature of feasible sets. We first prove that if an optimal decision lies on the boundary of the feasible set and the gradient of the underlying loss function is non-zero, then such an algorithm achieves a regret upper bound of O(ρ log T) in stochastic environments. Here, ρ > 0 is the radius of the smallest sphere that contains the optimal decision and encloses the feasible set. Unlike existing approaches, ours works directly with convex loss functions, simultaneously exploits the curvature of the loss functions, and achieves logarithmic regret using only a local property of the feasible set. Additionally, it achieves an O(√T) regret even in adversarial environments, where FTL suffers Ω(T) regret, and attains an O(ρ log T + √(C ρ log T)) regret bound in corrupted stochastic environments with corruption level C. Furthermore, by extending our analysis, we establish a regret upper bound of O(T^{(q-2)/(2(q-1))} (log T)^{q/(2(q-1))}) for q-uniformly convex feasible sets, where uniformly convex sets include strongly convex sets and ℓ_p-balls for p ∈ (1, ∞). This bound bridges the gap between the O(log T) regret bound for strongly convex sets (q = 2) and the O(√T) regret bound for non-curved sets (q → ∞).
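
To make the FTL phenomenon cited above concrete, here is a minimal sketch (not the paper's adaptive algorithm) of follow-the-leader for online linear optimization over the unit Euclidean ball, a strongly convex feasible set. The mean gradient mu, noise scale, and horizon are illustrative assumptions; when the average gradient is bounded away from zero, the leader -G/||G|| stabilizes quickly, which is the curvature effect being exploited.

import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 10_000
mu = np.array([0.6, 0.2, -0.3, 0.1, 0.4])  # assumed non-zero mean gradient

def ftl_action(G):
    # FTL over the unit Euclidean ball: argmin_{||x|| <= 1} <G, x> = -G/||G||
    n = np.linalg.norm(G)
    return -G / n if n > 0 else np.zeros_like(G)

G = np.zeros(d)
cum_loss = 0.0
for t in range(T):
    x = ftl_action(G)                      # play the leader
    g = mu + 0.1 * rng.standard_normal(d)  # stochastic linear loss <g, .>
    cum_loss += g @ x
    G += g

best_loss = G @ ftl_action(G)              # best fixed action in hindsight
print(f"regret after T={T}: {cum_loss - best_loss:.3f}")

Running this with a non-zero mu yields regret that grows far slower than √T, in line with the logarithmic rate; setting mu to zero removes the effect.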
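
As a sanity check on the q-uniformly convex bound, the two endpoints mentioned in the abstract follow directly from the exponents (a worked calculation, not taken from the paper):

\[
R_T = O\!\left(T^{\frac{q-2}{2(q-1)}} (\log T)^{\frac{q}{2(q-1)}}\right),
\qquad
q = 2:\ R_T = O(\log T),
\qquad
q \to \infty:\ \tfrac{q-2}{2(q-1)} \to \tfrac{1}{2},\ \tfrac{q}{2(q-1)} \to \tfrac{1}{2},
\]

so as q → ∞ the bound tends to O(√(T log T)), matching the O(√T) rate for non-curved sets up to a logarithmic factor.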