
Optimal Multiclass U-Calibration Error and Beyond

Haipeng Luo, Spandan Senapati, Vatsal Sharan

CoRR (2024)

Abstract
We consider the problem of online multiclass U-calibration, where a forecaster aims to make sequential distributional predictions over K classes with low U-calibration error, that is, low regret with respect to all bounded proper losses simultaneously. Kleinberg et al. (2023) developed an algorithm with U-calibration error O(K√(T)) after T rounds and raised the open question of what the optimal bound is. We resolve this question by showing that the optimal U-calibration error is Θ(√(KT)) – we start with a simple observation that the Follow-the-Perturbed-Leader algorithm of Daskalakis and Syrgkanis (2016) achieves this upper bound, followed by a matching lower bound constructed with a specific proper loss (which, as a side result, also proves the optimality of the algorithm of Daskalakis and Syrgkanis (2016) in the context of online learning against an adversary with finite choices). We also strengthen our results under natural assumptions on the loss functions, including Θ(log T) U-calibration error for Lipschitz proper losses, O(log T) U-calibration error for a certain class of decomposable proper losses, U-calibration error bounds for proper losses with a low covering number, and others.
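The upper bound in the abstract is obtained by running a Follow-the-Perturbed-Leader (FTPL) algorithm: add random perturbations to the cumulative losses of a finite set of candidate actions and play the current (perturbed) minimizer. The sketch below is only a minimal, generic FTPL simulation under assumptions of our own, not the construction of Daskalakis and Syrgkanis (2016) or of this paper: the function name ftpl_simulation, the exponential perturbation, and the toy action set of point-mass predictions scored by the Brier loss are all illustrative choices.

```python
import numpy as np

def ftpl_simulation(loss, outcomes, eta, seed=0):
    """Generic Follow-the-Perturbed-Leader over a finite action set.

    loss     : (A, K) array, loss[a, y] = loss of action a when the outcome is y
    outcomes : sequence of adversary outcomes in {0, ..., K-1}
    eta      : perturbation rate (the noise has scale 1 / eta)

    Returns (total loss incurred, regret against the best fixed action in hindsight).
    """
    rng = np.random.default_rng(seed)
    num_actions, _ = loss.shape
    cum = np.zeros(num_actions)             # cumulative loss of each action so far
    total = 0.0
    for y in outcomes:
        noise = rng.exponential(scale=1.0 / eta, size=num_actions)
        a = int(np.argmin(cum - noise))     # play the perturbed leader
        total += loss[a, y]
        cum += loss[:, y]                   # losses of all actions are revealed
    return total, total - cum.min()

# Toy usage: K = 3 outcomes; candidate actions are the K point-mass predictions,
# scored by the Brier (squared) loss, a bounded proper loss.
K = 3
points = np.eye(K)
brier = np.array([[float(np.sum((p - np.eye(K)[y]) ** 2)) for y in range(K)]
                  for p in points])
ys = np.random.default_rng(1).integers(0, K, size=1000)
print(ftpl_simulation(brier, ys, eta=0.1))
```

In the paper's setting the forecaster outputs distributions over the K classes and the regret must hold simultaneously for all bounded proper losses; the sketch only conveys the perturb-then-lead mechanism that the abstract credits for the O(√(KT)) upper bound.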