Testing Calibration in Nearly-Linear Time
arXiv (2024)
Abstract
In the recent literature on machine learning and decision making, calibration
has emerged as a desirable and widely-studied statistical property of the
outputs of binary prediction models. However, the algorithmic aspects of
measuring model calibration have remained relatively less well-explored.
Motivated by [BGHN23], which proposed a rigorous framework for measuring
distances to calibration, we initiate the algorithmic study of calibration
through the lens of property testing. We define the problem of calibration
testing from samples: given n draws from a distribution 𝒟 over
(prediction, binary outcome) pairs, the goal is to distinguish the case
where 𝒟 is perfectly calibrated from the case where 𝒟
is ε-far from calibration.
We make the simple observation that the empirical smooth calibration linear
program can be reformulated as an instance of minimum-cost flow on a
highly-structured graph, and design an exact dynamic programming-based solver
for it which runs in time O(n log²(n)), and solves the calibration testing
problem information-theoretically optimally in the same time. This improves
upon state-of-the-art black-box linear program solvers requiring
Ω(n^ω) time, where ω > 2 is the exponent of matrix
multiplication. We also develop algorithms for tolerant variants of our testing
problem improving upon black-box linear program solvers, and give sample
complexity lower bounds for alternative calibration measures to the one
considered in this work. Finally, we present experiments showing the testing
problem we define faithfully captures standard notions of calibration, and that
our algorithms scale efficiently to accommodate large sample sizes.
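To make the object of study concrete: the empirical smooth calibration error of [BGHN23] maximizes the correlation between a bounded 1-Lipschitz witness function and the residuals (y − v), which is a linear program once predictions are sorted. The sketch below solves that LP with a generic black-box solver (`scipy.optimize.linprog`); this is an illustrative baseline, not the paper's nearly-linear-time dynamic programming solver, and the exact LP formulation here is an assumption based on the standard definition of smooth calibration.

```python
import numpy as np
from scipy.optimize import linprog

def empirical_smooth_calibration(v, y):
    """Empirical smooth calibration error:
        max over f with |f_i| <= 1 and |f_i - f_j| <= |v_i - v_j|
        of (1/n) * sum_i f_i * (y_i - v_i).
    After sorting by prediction value, enforcing the Lipschitz
    constraint between adjacent points suffices."""
    order = np.argsort(v)
    v = np.asarray(v, dtype=float)[order]
    y = np.asarray(y, dtype=float)[order]
    n = len(v)
    c = -(y - v) / n  # linprog minimizes, so negate the objective
    # Adjacent Lipschitz constraints: |f_{i+1} - f_i| <= v_{i+1} - v_i
    A = np.zeros((2 * (n - 1), n))
    b = np.zeros(2 * (n - 1))
    for i in range(n - 1):
        A[2 * i, i], A[2 * i, i + 1] = -1.0, 1.0       # f_{i+1} - f_i <= dv
        A[2 * i + 1, i], A[2 * i + 1, i + 1] = 1.0, -1.0  # f_i - f_{i+1} <= dv
        b[2 * i] = b[2 * i + 1] = v[i + 1] - v[i]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(-1, 1)] * n, method="highs")
    return -res.fun

# A calibrated-looking sample (outcome frequency matches the prediction)
# yields error 0; systematic over-prediction yields a large error.
calibrated = empirical_smooth_calibration([0.5, 0.5, 0.5, 0.5], [0, 1, 0, 1])
miscalibrated = empirical_smooth_calibration([0.9, 0.9, 0.9, 0.9], [0, 0, 0, 0])
```

Solving this LP as a black box costs superlinear time in n, which is exactly the overhead the paper's minimum-cost-flow reformulation and dynamic programming solver remove.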