LM-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models
arXiv (2023)
Abstract
Today's large language models (LLMs) typically train on short text segments
(e.g., <4K tokens) due to the quadratic complexity of their Transformer
architectures. As a result, their performance suffers drastically on inputs
longer than those encountered during training, substantially limiting their
applications in real-world tasks involving long contexts such as encoding
scientific articles, code repositories, or long dialogues. Through theoretical
analysis and empirical investigation, this work identifies three major factors
contributing to this length generalization failure. Our theoretical analysis
further reveals that commonly used techniques like truncating the attention
window or relative positional encodings are inadequate to address them.
To address these challenges, we propose LM-Infinite, a simple and effective
method for enhancing LLMs' ability to handle long contexts. LM-Infinite is
highly flexible and can be applied to most modern LLMs off the shelf. Without
any parameter updates, it allows LLMs pre-trained on 2K- or 4K-token segments
to generalize to inputs of up to 200M tokens while maintaining perplexity. It
also improves zero-shot performance on downstream tasks such as Passkey
Retrieval and Qasper. LM-Infinite brings substantial efficiency gains as well:
a 2.7× decoding speedup and 7.5× memory saving over the original model. Our
code will be publicly available upon publication.