Functional Interpolation for Relative Positions Improves Long Context Transformers
ICLR 2024 (2023)
Abstract
Preventing the performance decay of Transformers on inputs longer than those
used for training has been an important challenge in extending the context
length of these models. Though the Transformer architecture has fundamentally
no limits on the input sequence lengths it can process, the choice of position
encoding used during training can limit the performance of these models on
longer inputs. We propose a novel functional relative position encoding with
progressive interpolation, FIRE, to improve Transformer generalization to
longer contexts. We theoretically prove that this can represent some of the
popular relative position encodings, such as T5's RPE, Alibi, and Kerple. We
next empirically show that FIRE models have better generalization to longer
contexts on both zero-shot language modeling and long text benchmarks.
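Below is a minimal sketch of a functional relative position bias with progressive interpolation, in the spirit of what the abstract describes. The specific transform (a log compression of relative distance), the normalization by the query position, the threshold `min_len`, and the MLP sizes are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch: a learned functional bias over normalized relative distances.
# Assumed details (psi = log1p, normalizer = max(query index, min_len), MLP sizes)
# are for illustration only.
import torch
import torch.nn as nn


class FunctionalRelativeBias(nn.Module):
    def __init__(self, num_heads: int, hidden_dim: int = 32, min_len: float = 512.0):
        super().__init__()
        # Small MLP mapping a scalar (normalized relative distance) to one bias per head.
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_heads),
        )
        self.min_len = min_len  # assumed threshold for the progressive normalizer

    def forward(self, seq_len: int) -> torch.Tensor:
        # Relative distance i - j for a causal layout, clamped at 0.
        pos = torch.arange(seq_len)
        rel = (pos[:, None] - pos[None, :]).clamp(min=0).float()
        # Log transform compresses large distances (one plausible functional form).
        psi = torch.log1p
        # Progressive interpolation: normalize by the (transformed) query position,
        # so the MLP input stays in a bounded range as contexts grow.
        denom = psi(torch.clamp(pos.float(), min=self.min_len))[:, None]
        x = (psi(rel) / denom).unsqueeze(-1)   # (L, L, 1)
        bias = self.mlp(x).permute(2, 0, 1)    # (num_heads, L, L)
        return bias  # added to attention logits before the softmax


# Usage: bias = FunctionalRelativeBias(num_heads=8)(seq_len=128)
```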
Keywords
Transformers, positional encoding, long context, length generalization