Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning
CoRR (2023)
Abstract
Large Language Models (LLMs) prompted to generate chain-of-thought (CoT)
exhibit impressive reasoning capabilities. Recent attempts at prompt
decomposition toward solving complex, multi-step reasoning problems depend on
the ability of the LLM to simultaneously decompose and solve the problem. A
significant disadvantage is that foundational LLMs are typically not available
for fine-tuning, making adaptation computationally prohibitive. We believe (and
demonstrate) that problem decomposition and solution generation are distinct
capabilities, better addressed by separate modules than by one monolithic LLM.
We introduce DaSLaM, which uses a decomposition generator to decompose complex
problems into subproblems that require fewer reasoning steps. These subproblems
are answered by a solver. We use a relatively small (13B parameters) LM as the
decomposition generator, which we train using policy gradient optimization to
interact with a solver LM (regarded as black-box) and guide it through
subproblems, thereby rendering our method solver-agnostic. Evaluation on
multiple reasoning datasets reveals that with our method, a 175
billion parameter LM (text-davinci-003) can produce competitive or even better
performance, compared to its orders-of-magnitude larger successor, GPT-4.
Additionally, we show that DaSLaM is not limited by the solver's capabilities
as a function of scale; e.g., solver LMs of diverse sizes show significant
performance improvements with our solver-agnostic decomposition technique.
Exhaustive ablation studies evince the superiority of our modular fine-tuning
technique over exorbitantly large decomposer LLMs that rely on prompting alone.
Keywords
larger language models, language models, complex, fine-tuned
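
The abstract describes a two-module pipeline: a small decomposition LM proposes subproblems, and a black-box solver LM answers them before producing the final answer. The sketch below is a minimal, illustrative reading of that inference loop, not the authors' implementation; the prompt templates, the use of an initial draft answer, and the stand-in model callables are assumptions made only for illustration, and the policy-gradient training of the decomposer is not shown.

# Minimal sketch of a decompose-then-solve loop in the spirit of DaSLaM.
# Prompts, function names, and the toy stand-in models are illustrative assumptions.

from typing import Callable, List, Tuple

def decompose(question: str, draft_answer: str,
              decomposer: Callable[[str], str]) -> List[str]:
    """Ask the (small) decomposer LM for subquestions, one per line."""
    prompt = (
        "Break the following problem into simpler subquestions, one per line.\n"
        f"Problem: {question}\n"
        f"Initial attempt: {draft_answer}\n"
        "Subquestions:"
    )
    raw = decomposer(prompt)
    return [line.strip() for line in raw.splitlines() if line.strip()]

def solve_with_decomposition(
    question: str,
    solver: Callable[[str], str],       # black-box solver LM (e.g., an API call)
    decomposer: Callable[[str], str],   # fine-tuned small decomposition LM
) -> Tuple[str, List[Tuple[str, str]]]:
    """Answer `question` by letting the solver attempt it first, then guiding it
    through decomposer-generated subquestions before asking the original question again."""
    draft = solver(f"Q: {question}\nLet's think step by step.\nA:")
    subquestions = decompose(question, draft, decomposer)

    context, trace = "", []
    for sq in subquestions:
        sub_answer = solver(f"{context}Q: {sq}\nA:")
        trace.append((sq, sub_answer))
        context += f"Q: {sq}\nA: {sub_answer}\n"

    final = solver(f"{context}Q: {question}\nA:")
    return final, trace

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without any model; replace with real LM calls.
    echo_solver = lambda prompt: "42"
    toy_decomposer = lambda prompt: "What is given?\nWhat is asked?"
    answer, steps = solve_with_decomposition("A toy word problem.", echo_solver, toy_decomposer)
    print(answer, steps)

Because the solver is only queried through prompts, any LM accessible via text in/text out can fill that role, which is what makes the decomposer solver-agnostic in this reading.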