Fast rates by transferring from auxiliary hypotheses

Machine Learning (2016)

Abstract
In this work we consider the learning setting where, in addition to the training set, the learner receives a collection of auxiliary hypotheses originating from other tasks. We focus on a broad class of ERM-based linear algorithms that can be instantiated with any non-negative smooth loss function and any strongly convex regularizer. We establish generalization and excess risk bounds, showing that, if the algorithm is fed with a good combination of source hypotheses, generalization happens at the fast rate 𝒪(1/m) instead of the usual 𝒪(1/√m). On the other hand, if the combination of source hypotheses is a misfit for the target task, we recover the usual learning rate. As a byproduct of our study, we also prove a new bound on the Rademacher complexity of the smooth loss class under weaker assumptions compared to previous works.
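The abstract does not spell out the algorithm, but the setting it describes is the familiar biased-regularization form of hypothesis transfer learning: run regularized ERM on the target sample while penalizing distance from a fixed combination of source hypotheses. The sketch below is only an illustration under that assumption, instantiated with the squared loss (a smooth, non-negative loss) and the strongly convex regularizer ‖w − h_src‖²; the function name, the combination weights, and the synthetic data are hypothetical and not taken from the paper.

```python
import numpy as np

def biased_ridge_erm(X, y, h_src, lam=1.0):
    """Regularized ERM with squared loss, biased toward a source hypothesis.

    Solves  min_w  (1/m) * ||X w - y||^2 + lam * ||w - h_src||^2
    by learning the correction v = w - h_src with ordinary ridge regression
    on the residual y - X h_src.
    """
    m, d = X.shape
    residual = y - X @ h_src
    v = np.linalg.solve(X.T @ X / m + lam * np.eye(d), X.T @ residual / m)
    return h_src + v

# Hypothetical usage: combine a few auxiliary hypotheses, then fit the target.
rng = np.random.default_rng(0)
d, m = 5, 50
sources = [rng.normal(size=d) for _ in range(3)]    # auxiliary hypotheses from other tasks
beta = np.array([0.5, 0.3, 0.2])                    # combination weights (assumed given)
h_src = sum(b * h for b, h in zip(beta, sources))

w_true = h_src + 0.05 * rng.normal(size=d)          # target close to the source combination
X = rng.normal(size=(m, d))
y = X @ w_true + 0.01 * rng.normal(size=m)

w_hat = biased_ridge_erm(X, y, h_src, lam=0.1)
print(np.linalg.norm(w_hat - w_true))               # small error: the bias helps here
```

In this toy run the target hypothesis lies close to the source combination, which is exactly the regime where the paper's analysis yields the fast 𝒪(1/m) rate; if h_src were a poor fit, the same estimator would fall back to behaving like unbiased ridge regression with the usual 𝒪(1/√m) rate.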
Keywords
Fast-rate generalization bounds, Transfer learning, Domain adaptation, Rademacher complexity, Smooth loss functions, Strongly-convex regularizers