Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model
ICLR 2024
Abstract
Training deep networks requires various design decisions regarding, for
instance, their architecture, data augmentation, or optimization. In this work,
we find that these training variations result in networks learning unique feature
sets from the data. Using public model libraries comprising thousands of models
trained on canonical datasets like ImageNet, we observe that for arbitrary
pairings of pretrained models, one model extracts significant data context
unavailable in the other – independent of overall performance. Given any
arbitrary pairing of pretrained models and no external rankings (such as
separate test sets, e.g. due to data privacy), we investigate if it is possible
to transfer such "complementary" knowledge from one model to another without
performance degradation – a task made particularly difficult as additional
knowledge can be contained in stronger, equiperformant or weaker models. Yet
facilitating robust transfer in scenarios agnostic to pretrained model pairings
would unlock auxiliary gains and knowledge fusion from any model repository
without restrictions on model and problem specifics – including from weaker,
lower-performance models. This work therefore provides an initial, in-depth
exploration of the viability of such general-purpose knowledge transfer. Across
large-scale experiments, we first reveal the shortcomings of standard knowledge
distillation techniques, and then propose a much more general extension through
data partitioning for successful transfer between nearly all pretrained models,
which we show can also be done unsupervised. Finally, we assess both the
scalability and impact of fundamental model properties on successful
model-agnostic knowledge transfer.
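The "extension through data partitioning" mentioned above invites a concrete illustration. Below is a minimal sketch of partition-gated knowledge distillation between two pretrained classifiers, assuming a max-softmax confidence comparison as the partition criterion; the `teacher`/`student` roles, temperature `T`, and the confidence rule are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F


def partitioned_distillation_loss(student_logits, teacher_logits, labels, T=2.0):
    """Distill only where the teacher looks more reliable than the student;
    elsewhere, fall back to plain supervised cross-entropy.

    Logits have shape (batch, classes); labels have shape (batch,).
    """
    # Partition criterion (an assumption): compare max-softmax confidences.
    with torch.no_grad():
        teacher_conf = teacher_logits.softmax(dim=1).max(dim=1).values
        student_conf = student_logits.softmax(dim=1).max(dim=1).values
        transfer_mask = teacher_conf > student_conf  # per-sample partition

    # Standard temperature-scaled distillation term, kept per-sample.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="none",
    ).sum(dim=1) * (T * T)

    # Supervised term on the complement, protecting what the student knows.
    ce = F.cross_entropy(student_logits, labels, reduction="none")

    return torch.where(transfer_mask, kd, ce).mean()


# Usage sketch: the teacher stays frozen; only the student receives gradients.
if __name__ == "__main__":
    student_logits = torch.randn(8, 10, requires_grad=True)
    teacher_logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    loss = partitioned_distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(float(loss))
```

The intent of gating the distillation term per sample, rather than distilling on the full batch, is to avoid overwriting knowledge the student already holds – which is what would make transfer viable even from a weaker, lower-performance teacher.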
Keywords
transfer learning, pretraining, weak-to-strong transfer, continual learning