Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought
arXiv (2024)
Abstract
We introduce a novel framework, LM-Guided CoT, that leverages a lightweight
(i.e., <1B) language model (LM) for guiding a black-box large (i.e., >10B) LM
in reasoning tasks. Specifically, the lightweight LM first generates a
rationale for each input instance. The frozen large LM is then prompted to
predict a task output based on the rationale generated by the lightweight LM.
Our approach is resource-efficient in the sense that it only requires training
the lightweight LM. We optimize the model through 1) knowledge distillation and
2) reinforcement learning from rationale-oriented and task-oriented reward
signals. We evaluate our method on multi-hop extractive question answering (QA)
benchmarks: HotpotQA and 2WikiMultiHopQA. Experimental results show that our
approach outperforms all baselines in answer prediction accuracy. We also find
that reinforcement learning helps the model produce higher-quality rationales,
which in turn improve QA performance.
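The two-stage inference pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two model calls are stand-in stubs, where `small_lm_rationale` would be the trained lightweight (<1B) rationale generator and `large_lm_answer` a call to the frozen black-box large (>10B) LM.

```python
def small_lm_rationale(question: str, context: str) -> str:
    """Stub for the lightweight LM: generate a rationale for one input instance."""
    # A real implementation would decode from the trained <1B model here.
    return f"Rationale: the context contains the facts needed to answer '{question}'."


def large_lm_answer(question: str, context: str, rationale: str) -> str:
    """Stub for the frozen large LM: predict the task output given the rationale."""
    prompt = (
        f"Context: {context}\n"
        f"{rationale}\n"
        f"Question: {question}\n"
        f"Answer:"
    )
    # A real implementation would send `prompt` to a black-box LM API.
    return "<answer decoded by the frozen large LM>"


def lm_guided_cot(question: str, context: str) -> tuple[str, str]:
    """Step 1: small LM writes the rationale. Step 2: large LM answers from it."""
    rationale = small_lm_rationale(question, context)
    answer = large_lm_answer(question, context, rationale)
    return rationale, answer
```

Only the first stage is trainable (via knowledge distillation and RL on rationale- and task-oriented rewards), which is what makes the approach resource-efficient: the large LM is never updated.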