Recurrent Drafter for Fast Speculative Decoding in Large Language Models
arXiv (2024)
Abstract
In this paper, we introduce an improved approach to speculative decoding
aimed at enhancing the efficiency of serving large language models. Our method
capitalizes on the strengths of two established techniques: the classic
two-model speculative decoding approach, and the more recent single-model
approach, Medusa. Drawing inspiration from Medusa, our approach adopts a
single-model strategy for speculative decoding. However, our method
distinguishes itself by employing a single, lightweight draft head with a
recurrent dependency design, akin in essence to the small draft model used in
classic speculative decoding, but without the complexities of the full
transformer architecture. Because of the recurrent dependency, we can use
beam search to swiftly filter out undesired candidates with the draft head. The
outcome is a method that retains the simplicity of the single-model design
while avoiding the data-dependent tree attention structure that Medusa must
construct solely for inference. We empirically demonstrate the effectiveness of the
proposed method on several popular open source language models, along with a
comprehensive analysis of the trade-offs involved in adopting this approach.
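
To make the mechanism concrete, below is a minimal PyTorch sketch of the two ideas the abstract describes: a lightweight draft head with a recurrent state update, seeded from the base model's hidden state, and beam search over that head to propose candidate token sequences. The names (RecurrentDraftHead, draft_beam_search), the specific SiLU recurrent update, and all sizes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentDraftHead(nn.Module):
    """Hypothetical lightweight draft head with a recurrent dependency.

    The drafter's state starts from the base LLM's last hidden state and is
    updated with the embedding of each token it drafts, so later draft
    positions depend on earlier ones (unlike Medusa's independent heads).
    """

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.Linear(2 * hidden_size, hidden_size)  # one-step recurrence
        self.lm_head = nn.Linear(hidden_size, vocab_size)

    def step(self, state: torch.Tensor, token: torch.Tensor):
        """One draft step: (B, H) state and (B,) token ids -> logits, new state."""
        x = torch.cat([state, self.embed(token)], dim=-1)
        new_state = F.silu(self.rnn(x))
        return self.lm_head(new_state), new_state


@torch.no_grad()
def draft_beam_search(head, llm_hidden, last_token, steps=4, beam_width=4):
    """Propose `beam_width` candidate continuations of length `steps`.

    Because each draft step conditions on the previous one, ordinary beam
    search can prune unlikely continuations as they are generated.
    """
    vocab = head.lm_head.out_features
    states = llm_hidden.unsqueeze(0)            # (1, H); fans out after step 1
    tokens = last_token.view(1)                 # (1,)
    seqs = torch.empty(1, 0, dtype=torch.long)  # drafted tokens per beam
    scores = torch.zeros(1)                     # cumulative log-probabilities
    for _ in range(steps):
        logits, new_states = head.step(states, tokens)
        cand = scores.unsqueeze(1) + torch.log_softmax(logits, dim=-1)
        scores, idx = cand.view(-1).topk(min(beam_width, cand.numel()))
        parent = torch.div(idx, vocab, rounding_mode="floor")
        tokens = idx % vocab
        states = new_states[parent]
        seqs = torch.cat([seqs[parent], tokens.unsqueeze(1)], dim=1)
    return seqs, scores                         # (beam_width, steps) candidates


# Toy usage with made-up sizes; `hidden` stands in for the LLM's hidden state.
head = RecurrentDraftHead(hidden_size=256, vocab_size=1000)
hidden = torch.randn(256)
seqs, scores = draft_beam_search(head, hidden, torch.tensor(1))
print(seqs.shape)  # torch.Size([4, 4])
```

In full speculative decoding, these candidates would then be verified in parallel by the base model, which keeps the longest accepted prefix; the sketch covers only the drafting side.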