MiLe Loss: a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models
CoRR (2023)
Abstract
Generative language models are usually pretrained on large text corpora by
predicting the next token (i.e., sub-word/word/phrase) given the previous ones.
Recent works have demonstrated the impressive performance of large generative
language models on downstream tasks. However, existing generative language
models generally neglect an inherent challenge in text corpora during training,
i.e., the imbalance between frequent and infrequent tokens. This imbalance can
lead a language model to be dominated by common, easy-to-learn tokens while
overlooking the infrequent, difficult-to-learn ones. To alleviate this, we
propose the MiLe Loss function for mitigating the bias of learning difficulties
across tokens. During training, it dynamically assesses the learning difficulty
of a to-be-learned token according to the information entropy of the
corresponding predicted probability distribution over the vocabulary. It then
scales the training loss adaptively, encouraging the model to focus more on
the difficult-to-learn tokens. On the Pile dataset, we train generative
language models at three scales: 468M, 1.2B, and 6.7B parameters.
Experiments reveal that models incorporating the proposed MiLe Loss achieve
consistent performance improvements on downstream benchmarks.
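The idea described above — weighting each token's loss by the entropy of the model's predicted distribution — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact scaling rule and the focusing exponent `gamma` are assumptions for clarity, and the token-level distributions are plain Python lists rather than model outputs.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def mile_style_loss(pred_dists, targets, gamma=1.0):
    """Entropy-weighted next-token loss, averaged over positions.

    pred_dists: list of predicted probability distributions over the vocabulary
    targets:    list of gold token ids, one per position
    gamma:      hypothetical focusing exponent (not specified in the abstract)
    """
    total = 0.0
    for probs, t in zip(pred_dists, targets):
        nll = -math.log(probs[t])           # standard cross-entropy term
        weight = entropy(probs) ** gamma    # high-entropy (hard) tokens get larger weight
        total += weight * nll
    return total / len(targets)
```

A confident prediction (low entropy) is down-weighted relative to an uncertain one (high entropy), which is one simple way to realize the "focus more on difficult-to-learn tokens" behavior the abstract describes.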
Keywords
language models, learning difficulties, bias