DetermLR: Augmenting LLM-based Logical Reasoning from Indeterminacy to Determinacy
arXiv (Cornell University), 2023
Abstract
Recent advances in large language models (LLMs) have revolutionized the
landscape of reasoning tasks. To enhance the capabilities of LLMs to emulate
human reasoning, prior studies have focused on modeling reasoning steps using
various thought structures like chains, trees, or graphs. However, LLM-based
reasoning still encounters the following challenges: (1) Limited adaptability
of preset structures to diverse tasks; (2) Insufficient precision in exploiting
known conditions to derive new ones; and (3) Inadequate consideration of
historical reasoning experiences for subsequent reasoning steps. To this end,
we propose DetermLR, a novel perspective that rethinks the reasoning process as
an evolution from indeterminacy to determinacy. First, we categorize known
conditions into two types: determinate and indeterminate premises. This provides
an overall direction for the reasoning process and guides LLMs in converting
indeterminate data into progressively determinate insights. Subsequently, we
leverage quantitative measurements to prioritize more relevant premises to
explore new insights. Furthermore, we automate the storage and extraction of
available premises and reasoning paths with reasoning memory, preserving
historical reasoning details for subsequent reasoning steps. Comprehensive
experimental results demonstrate that DetermLR surpasses all baselines on
various logical reasoning benchmarks: LogiQA, ProofWriter, FOLIO, PrOntoQA, and
LogicalDeduction. Compared to previous multi-step reasoning methods, DetermLR
achieves higher accuracy with fewer reasoning steps, highlighting its superior
efficiency and effectiveness in solving logical reasoning tasks.
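The abstract describes three mechanisms: categorizing premises as determinate or indeterminate, prioritizing the most relevant premises when deriving new insights, and preserving reasoning history in a reasoning memory. A minimal sketch of that loop is below; the `Premise`, `ReasoningMemory`, and `derive` names, and the single scalar `relevance` score, are illustrative assumptions rather than the paper's actual implementation (in DetermLR the derivation step would be an LLM call).

```python
from dataclasses import dataclass, field

@dataclass
class Premise:
    text: str
    determinate: bool
    relevance: float  # hypothetical quantitative relevance score

@dataclass
class ReasoningMemory:
    """Stores available premises and the reasoning path that produced them."""
    premises: list = field(default_factory=list)
    paths: list = field(default_factory=list)

    def store(self, premise, step):
        self.premises.append(premise)
        self.paths.append(step)

def reason(premises, derive, max_steps=10):
    """Iteratively convert indeterminate premises into determinate insights.

    `derive` stands in for an LLM call that combines the highest-priority
    indeterminate premise with the current determinate facts; it returns a
    new determinate Premise, or None if nothing new can be concluded."""
    memory = ReasoningMemory()
    for p in premises:
        memory.store(p, "initial premise")
    for _ in range(max_steps):
        pending = [p for p in memory.premises if not p.determinate]
        if not pending:
            break  # everything is determinate: reasoning has converged
        # prioritize the most relevant indeterminate premise to explore next
        target = max(pending, key=lambda p: p.relevance)
        facts = [p for p in memory.premises if p.determinate]
        new_fact = derive(target, facts)
        target.determinate = True
        if new_fact is not None:
            memory.store(new_fact, f"derived from: {target.text}")
    return memory
```

With a toy `derive` that resolves each indeterminate premise into one new determinate fact, the loop terminates once no indeterminate premises remain, and the memory retains the full derivation history.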
Keywords
Refactoring, Topic Modeling, Language Modeling