
ResDecode: Accelerating Large Language Models Inference Via Residual Decoding Heads

Ziqian Zeng, Jiahong Yu, Qianshi Pang, Zihao Wang, Huiping Zhuang, Fan Yu, Hongen Shao, Xiaofeng Zou

Big Data Mining and Analytics (2025)

Keywords
speculative decoding, efficient inference, Large Language Models (LLMs)