Transformers Can Achieve Length Generalization But Not Robustly
CoRR (2024)
Abstract
Length generalization, defined as the ability to extrapolate from shorter
training sequences to longer test ones, is a significant challenge for language
models. This issue persists even with large-scale Transformers handling
relatively straightforward tasks. In this paper, we test the Transformer's
ability to generalize in length using the task of adding two integers. We
show that the success of length generalization is intricately linked to the
data format and the type of position encoding. Using the right combination of
data format and position encodings, we show for the first time that standard
Transformers can extrapolate to a sequence length that is 2.5x the input
length. Nevertheless, unlike in-distribution generalization, length
generalization remains fragile, significantly influenced by factors like random
weight initialization and training data order, leading to large variances
across different random seeds.
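The evaluation setup described above can be sketched in code. The snippet below generates synthetic addition examples and a train/test split where test operands are roughly 2.5x longer than training ones; the reversed-digit serialization is one data-format choice of the kind the paper studies, and the exact format, digit counts, and function names here are illustrative assumptions, not the paper's precise recipe.

```python
import random

def make_example(n_digits, reverse=True):
    # Sample two integers with up to n_digits digits each.
    a = random.randint(0, 10**n_digits - 1)
    b = random.randint(0, 10**n_digits - 1)
    s = a + b
    # Optionally reverse digit order, a common formatting trick for
    # teaching Transformers addition (least-significant digit first).
    fmt = (lambda x: str(x)[::-1]) if reverse else str
    # Serialize as a plain-text sequence for a language model to complete.
    return f"{fmt(a)}+{fmt(b)}={fmt(s)}"

def make_split(n_train_digits=10, ratio=2.5, n_examples=1000):
    # Train on short operands; test on operands ~ratio times longer,
    # probing length generalization rather than in-distribution accuracy.
    train = [make_example(n_train_digits) for _ in range(n_examples)]
    test = [make_example(int(n_train_digits * ratio)) for _ in range(n_examples)]
    return train, test
```

Measuring exact-match accuracy on the longer test split, across many random seeds, would surface the seed-to-seed variance the abstract highlights.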