Complexity Theoretic Limitations on Learning Halfspaces

STOC '16: Symposium on Theory of Computing, Cambridge, MA, USA, June 2016

Abstract
We study the problem of agnostically learning halfspaces, which is defined by a fixed but unknown distribution D on Q^n x {-1,1}. We define Err_H(D) as the least error of a halfspace classifier for D. A learner with access to D must return a hypothesis whose error is small compared to Err_H(D). Using the recently developed method of Daniely, Linial and Shalev-Shwartz, we prove hardness-of-learning results under the assumption that random K-XOR formulas are hard to (strongly) refute. We show that no efficient learning algorithm has non-trivial worst-case performance, even under the guarantees that Err_H(D) <= eta for an arbitrarily small constant eta > 0 and that D is supported on the Boolean cube. Namely, even under these favorable conditions, and for every c > 0, it is hard to return a hypothesis with error <= 1/2 - n^{-c}. In particular, no efficient algorithm can achieve a constant approximation ratio. Under a stronger version of the assumption (where K can be poly-logarithmic in n), we can take eta = 2^{-log^{1-nu}(n)} for an arbitrarily small nu > 0. These results substantially improve on previously known results, which only show hardness of exact learning.
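For concreteness, the central quantity can be stated formally. The following LaTeX is a sketch of the standard agnostic-learning definition; the abstract does not fix the exact parameterization of the halfspace class H, so the choice of affine halfspaces below is an assumption for illustration.

\[
\mathrm{Err}_H(D) \;=\; \min_{h \in H}\; \Pr_{(x,y)\sim D}\bigl[\,h(x) \neq y\,\bigr],
\qquad
H \;=\; \bigl\{\, x \mapsto \operatorname{sign}(\langle w, x\rangle - \theta) \;:\; w \in \mathbb{R}^n,\ \theta \in \mathbb{R} \,\bigr\}.
\]

In this notation, the main hardness statement says that for every c > 0, no efficient learner is guaranteed to output a hypothesis h with \(\Pr_{(x,y)\sim D}[h(x)\neq y] \le \tfrac{1}{2} - n^{-c}\), even when \(\mathrm{Err}_H(D) \le \eta\) and D is supported on the Boolean cube.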
Keywords
Hardness of Learning, Halfspaces, Random XOR