The Price of Adaptivity in Stochastic Convex Optimization
CoRR (2024)
Abstract
We prove impossibility results for adaptivity in non-smooth stochastic convex
optimization. Given a set of problem parameters we wish to adapt to, we define
a "price of adaptivity" (PoA) that, roughly speaking, measures the
multiplicative increase in suboptimality due to uncertainty in these
parameters. When the initial distance to the optimum is unknown but a gradient
norm bound is known, we show that the PoA is at least logarithmic for expected
suboptimality, and double-logarithmic for median suboptimality. When there is
uncertainty in both distance and gradient norm, we show that the PoA must be
polynomial in the level of uncertainty. Our lower bounds nearly match existing
upper bounds, and establish that there is no parameter-free lunch.
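As a rough sketch of the quantity the abstract describes (the notation below is ours, not necessarily the paper's), the price of adaptivity compares the suboptimality attainable under parameter uncertainty with the suboptimality attainable when the parameters are known:

```latex
% Hedged sketch: \Theta is the set of problem parameters the method must
% adapt to, R(alg; \theta) the (expected or median) suboptimality of an
% algorithm on the problem with parameters \theta, and R^\star(\theta)
% the best suboptimality achievable when \theta is known in advance.
\mathrm{PoA} \;=\; \inf_{\mathsf{alg}} \;\sup_{\theta \in \Theta}\;
\frac{R(\mathsf{alg};\, \theta)}{R^\star(\theta)}
```

Under this reading, the abstract's results say this ratio grows at least logarithmically (in expectation) when only the distance to the optimum is unknown, and polynomially when both the distance and the gradient-norm bound are uncertain.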