Learning From Correctness Without Prompting Makes LLM Efficient Reasoner
arXiv (2024)
Abstract
Large language models (LLMs) have demonstrated outstanding performance across
various tasks, yet they still exhibit limitations such as hallucination,
unfaithful reasoning, and toxic content. One potential approach to mitigate
these issues is learning from human or external feedback (e.g. tools). In this
paper, we introduce an intrinsic self-correcting reasoning framework for LLMs that
eliminates the need for human feedback, external tools, and handcrafted prompts.
The proposed framework, based on a multi-step reasoning paradigm called
Learning from Correctness (LeCo), improves reasoning performance without
needing to learn from errors. This paradigm prioritizes learning from correct
reasoning steps and introduces a method to measure confidence for each
reasoning step based on generation logits. Experimental
results across various multi-step reasoning tasks demonstrate the effectiveness
of the framework in improving reasoning performance with reduced token
consumption.
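
To make the abstract's idea of logit-based step confidence concrete, below is a minimal sketch, assuming confidence is approximated by the mean token log-probability of each reasoning step and that generation is resumed from the first low-confidence step. The function names, the threshold value, and this particular scoring rule are illustrative assumptions, not the paper's exact formulation, which may combine several logit-based signals.

```python
def step_confidence(step_token_logprobs):
    """Score a reasoning step by the mean log-probability of its tokens.

    step_token_logprobs: per-token log-probabilities for the step, as exposed
    by most decoders/APIs. Higher means more confident. This is a simplified
    proxy for the paper's confidence measure.
    """
    if not step_token_logprobs:
        return float("-inf")
    return sum(step_token_logprobs) / len(step_token_logprobs)


def split_correct_prefix(steps, threshold=-0.5):
    """Keep the longest prefix of steps whose confidence stays above a
    threshold; reasoning would be regenerated from the first uncertain step.

    steps: list of (step_text, token_logprobs) tuples.
    Returns (trusted_steps, index_of_first_uncertain_step).
    """
    trusted = []
    for i, (text, logprobs) in enumerate(steps):
        if step_confidence(logprobs) < threshold:
            return trusted, i
        trusted.append(text)
    return trusted, len(steps)


# Hypothetical usage: the trusted prefix is fed back as context so the model
# only re-generates from the first low-confidence step, saving tokens.
steps = [
    ("Step 1: 12 * 3 = 36.", [-0.05, -0.10, -0.02]),
    ("Step 2: 36 + 7 = 44.", [-1.2, -0.9, -1.5]),
]
trusted, restart_at = split_correct_prefix(steps)
print(trusted, restart_at)  # ['Step 1: 12 * 3 = 36.'] 1
```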