Learnability is a Compact Property
CoRR (2024)
Abstract
Recent work on learning has yielded a striking result: the learnability of
various problems can be undecidable, or independent of the standard ZFC axioms
of set theory. Furthermore, the learnability of such problems can fail to be a
property of finite character: informally, it cannot be detected by examining
finite projections of the problem.
On the other hand, learning theory abounds with notions of dimension that
characterize learning and consider only finite restrictions of the problem,
i.e., are properties of finite character. How can these results be reconciled?
More precisely, which classes of learning problems are vulnerable to logical
undecidability, and which are within the grasp of finite characterizations?
We demonstrate that the difficulty of supervised learning with metric losses
admits a tight finite characterization. In particular, we prove that the sample
complexity of learning a hypothesis class can be detected by examining its
finite projections. For realizable and agnostic learning with respect to a wide
class of proper loss functions, we demonstrate an exact compactness result: a
class is learnable with a given sample complexity precisely when the same is
true of all its finite projections. For realizable learning with improper loss
functions, we show that exact compactness of sample complexity can fail, and
provide matching upper and lower bounds of a factor of 2 on the extent to which
such sample complexities can differ. We conjecture that larger gaps are
possible for the agnostic case.
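To make the compactness claim concrete, here is a schematic paraphrase rather than the paper's formal statement: writing \(\mathcal{H}\) for a hypothesis class over a domain \(\mathcal{X}\), and \(\mathcal{H}|_S = \{h|_S : h \in \mathcal{H}\}\) for its projection onto a finite \(S \subseteq \mathcal{X}\) (notation ours), the exact result for proper losses takes the form
\[
\mathcal{H} \text{ is learnable with sample complexity } m(\epsilon,\delta)
\iff
\mathcal{H}|_S \text{ is learnable with sample complexity } m(\epsilon,\delta) \text{ for every finite } S \subseteq \mathcal{X},
\]
while in the improper realizable case the sample complexity of \(\mathcal{H}\) can exceed the supremum over its finite projections, by a factor of at most 2, and this factor is tight.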
At the heart of our technical work is a compactness result concerning
assignments of variables that maintain a class of functions below a target
value, which generalizes Hall's classic matching theorem and may be of
independent interest.
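For reference, the classical result being generalized is standard background rather than a statement from this paper: Hall's theorem says that a bipartite graph with parts \(X\) and \(Y\) admits a matching saturating \(X\) if and only if
\[
|N(S)| \ge |S| \quad \text{for every } S \subseteq X,
\]
where \(N(S)\) denotes the set of neighbors of \(S\). The paper's generalization replaces matchings by assignments of variables that keep every function in a given class below a target value.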