Tune without Validation: Searching for Learning Rate and Weight Decay on Training Sets
arXiv (2024)
Abstract
We introduce Tune without Validation (Twin), a pipeline for tuning learning
rate and weight decay without validation sets. We leverage a recent theoretical
framework concerning learning phases in hypothesis space to devise a heuristic
that predicts what hyper-parameter (HP) combinations yield better
generalization. Twin performs a grid search of trials according to an
early-/non-early-stopping scheduler and then segments the region that provides
the best results in terms of training loss. Among these trials, the weight norm
strongly correlates with predicting generalization. To assess the effectiveness
of Twin, we run extensive experiments on 20 image classification datasets and
train several families of deep networks, including convolutional, transformer,
and feed-forward models. We demonstrate proper HP selection when training from
scratch and fine-tuning, emphasizing small-sample scenarios.
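The selection heuristic sketched in the abstract can be illustrated with a short snippet. This is a minimal sketch, not the authors' implementation: the trial-record fields, the quantile-based segmentation of the low-training-loss region, and the choice of the smallest weight norm as the generalization proxy are all assumptions made here for illustration.

```python
# Hypothetical sketch of a Twin-style selection rule: among grid-search
# trials, segment the region with the lowest training losses, then use
# the weight norm to pick the trial expected to generalize best.
# Field names ("lr", "wd", "train_loss", "weight_norm") are illustrative.

def select_hyperparameters(trials, loss_quantile=0.5):
    """Return the (lr, wd) pair from the low-training-loss region
    whose model has the smallest weight norm (assumed proxy)."""
    # Segment the best-training-loss region: keep trials whose final
    # training loss falls within the given quantile of all trial losses.
    losses = sorted(t["train_loss"] for t in trials)
    cutoff = losses[max(0, int(len(losses) * loss_quantile) - 1)]
    candidates = [t for t in trials if t["train_loss"] <= cutoff]
    # Among these candidates, select by weight norm; here we assume a
    # smaller norm predicts better generalization.
    best = min(candidates, key=lambda t: t["weight_norm"])
    return best["lr"], best["wd"]
```

With a toy grid of four trials, the rule first discards the two high-loss trials and then prefers the remaining trial with the smaller weight norm.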