Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement
arXiv (2023)
Abstract
To enhance the multi-step reasoning capabilities of large language models,
researchers have extensively explored prompting methods, notably the
Chain-of-Thought (CoT) method which explicitly elicits human-like rationales.
However, they have inadvertently overlooked the potential of enhancing model
reasoning performance by formulating higher-quality problems. In this work, we
start from the problem side and propose Self-Polish (SP), a novel method that
facilitates the model's reasoning by guiding it to progressively refine the
given problems to be more comprehensible and solvable. We also explore several
automatic prompting variants and propose the Self-Polish prompt bank for the
community. SP is orthogonal to all other prompting methods on the
answer/reasoning side, such as CoT, allowing seamless integration with state-of-the-art
techniques for further improvement. Thorough experiments show that the proposed
method attains notable and consistent effectiveness on five reasoning
benchmarks across different models. Furthermore, our method also showcases
impressive performance on robustness evaluation. Codes and prompts are
available at https://github.com/WooooDyy/Self-Polish.
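The core idea of the abstract, iteratively rewriting a problem until it is clearer before solving it, can be sketched as a simple loop. This is a hypothetical illustration, not the paper's implementation: `refine` stands in for an LLM call (the paper's actual prompts are in its prompt bank), and `toy_refine` is an invented stub that strips filler words.

```python
# Minimal sketch of a Self-Polish-style refinement loop (hypothetical;
# the real method drives an LLM with prompts from the paper's prompt bank).

def self_polish(problem, refine, max_rounds=3):
    """Progressively refine `problem` until it stops changing or the
    round budget is exhausted, then return the final version."""
    for _ in range(max_rounds):
        polished = refine(problem)
        if polished == problem:  # converged: no further rewriting needed
            break
        problem = polished
    return problem

# Toy stand-in for an LLM refiner: removes filler to emulate
# making a problem more comprehensible.
def toy_refine(p):
    return p.replace("basically ", "").replace("you know, ", "")

print(self_polish("basically you know, what is 2 + 2?", toy_refine))
# → what is 2 + 2?
```

In the paper's setting, the refined problem would then be passed to any answer-side method (e.g. CoT prompting), which is what makes the two approaches composable.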