Large Learning Rates Improve Generalization: But How Large Are We Talking About?
CoRR (2023)
Abstract
Inspired by recent research that recommends starting neural network training
with large learning rates (LRs) to achieve the best generalization, we explore
this hypothesis in detail. Our study clarifies the initial LR ranges that
provide optimal results for subsequent training with a small LR or weight
averaging. We find that these ranges are in fact significantly narrower than
generally assumed. We conduct our main experiments in a simplified setup that
allows precise control of the learning rate hyperparameter and validate our key
findings in a more practical setting.
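The recipe the abstract refers to can be sketched as a two-phase schedule: train first with a large LR, then continue with a small LR, optionally averaging the late-phase weights. The toy loss, LR values, and step counts below are illustrative assumptions, not the paper's actual experimental setup:

```python
# Illustrative sketch of the two-phase training recipe: a large-LR phase
# followed by a small-LR phase with weight averaging. All numbers here are
# hypothetical; the paper studies which large-LR ranges work best.

def grad(w):
    # gradient of a toy quadratic loss f(w) = 0.5 * w**2
    return w

def sgd(w, lr, steps):
    # plain gradient descent for a fixed number of steps
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w = 10.0
w = sgd(w, lr=0.9, steps=20)        # phase 1: large initial LR

tail = []
for _ in range(10):                  # phase 2: small LR, collecting snapshots
    w = sgd(w, lr=0.05, steps=5)
    tail.append(w)

w_avg = sum(tail) / len(tail)        # weight averaging over late snapshots
```

The paper's point is that the phase-1 LR (here 0.9) has a narrower useful range than commonly assumed; too small and the generalization benefit vanishes, too large and training destabilizes.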