Towards Optimal Learning of Language Models
CoRR (2024)
Abstract
This work studies the general principles of improving the learning of
language models (LMs), which aim at reducing the number of training steps
needed to achieve superior performance. Specifically, we present a theory
for the optimal learning of LMs. We first propose an objective that
optimizes LM learning by maximizing the data compression ratio in an
"LM-training-as-lossless-compression" view. We then derive a theorem, named
the Learning Law, that reveals the properties of the dynamics of the optimal
learning process under our objective. The theorem is validated by
experiments on a linear classification task and a real-world language
modeling task. Finally, we empirically verify that the optimal learning of
LMs essentially stems from improving the coefficients in the scaling law of
LMs, which indicates great promise for designing practical methods that
accelerate LM learning. Our code can be found at
https://aka.ms/LearningLaw.
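
The "LM-training-as-lossless-compression" view can be made concrete with a
minimal sketch, assuming the standard prequential (online) coding
formalization; the notation here is illustrative and not necessarily the
paper's exact objective. Under this view, an LM trained on a token stream
encodes each token at a cost equal to its log-loss at that training step, so
the total code length equals the area under the training-loss curve, and
maximizing the compression ratio amounts to minimizing that area:

```latex
% Sketch of the compression view (prequential coding); \theta_t denotes the
% LM parameters when token x_t is encoded. Illustrative notation only.
\[
  \mathrm{CodeLen}(x_{1:T})
    \;=\; \sum_{t=1}^{T} -\log_2 p_{\theta_t}\!\left(x_t \mid x_{<t}\right)
\]
\[
  \mathrm{CompressionRatio}
    \;=\; \frac{\text{raw size of } x_{1:T} \text{ in bits}}
               {\mathrm{CodeLen}(x_{1:T})}
\]
% Since -log_2 p_{\theta_t} is the per-step training loss in bits, CodeLen is
% the area under the loss curve; maximizing the ratio minimizes this area.
```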
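
The claim that faster learning shows up as better scaling-law coefficients
can also be illustrated with a small fit. Below is a minimal sketch assuming
a power-law form L(t) = L0 + B * t^(-beta) for loss versus training step;
this functional form and the variable names are assumptions for
illustration, not necessarily the parameterization used in the paper. An
accelerated learning policy would manifest as a smaller B or a larger beta,
i.e., the same loss reached in fewer steps.

```python
# Hypothetical sketch: fitting scaling-law coefficients to a loss curve.
# The form L(t) = L0 + B * t**(-beta) is an assumption for illustration.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(t, L0, B, beta):
    # Irreducible loss L0 plus a power-law decay in training step t.
    return L0 + B * t ** (-beta)

# Synthetic loss curve standing in for a real training run.
rng = np.random.default_rng(0)
steps = np.arange(100, 10_000, 100, dtype=float)
loss = scaling_law(steps, L0=1.8, B=50.0, beta=0.6) \
       + rng.normal(0.0, 0.01, steps.size)

# Fit the three coefficients from the observed (step, loss) pairs.
(L0, B, beta), _ = curve_fit(scaling_law, steps, loss, p0=(1.0, 10.0, 0.5))
print(f"fitted L0={L0:.3f}, B={B:.2f}, beta={beta:.3f}")
# Comparing coefficients fitted on a baseline run vs. an "optimal" run would
# quantify the speed-up: a larger beta (or smaller B) means fewer steps to
# reach the same loss.
```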