Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic
International Conference on Computational Linguistics (2023)
Abstract
Recent advancements in large language models have showcased their remarkable
generalizability across various domains. However, their reasoning abilities
still have significant room for improvement, especially when confronted with
scenarios requiring multi-step reasoning. Although large language models
possess extensive knowledge, their reasoning often fails to effectively utilize
this knowledge to establish a coherent thinking paradigm. These models
sometimes hallucinate because their reasoning procedures are unconstrained
by logical principles. To improve the zero-shot chain-of-thought
reasoning ability of large language models, we propose LoT (Logical Thoughts)
prompting, a self-improvement framework that leverages principles rooted in
symbolic logic, particularly Reductio ad Absurdum, to systematically verify and
rectify the reasoning processes step by step. Experimental evaluations
conducted on language tasks in diverse domains, including arithmetic,
commonsense, symbolic, causal inference, and social problems, demonstrate the
efficacy of logic-enhanced reasoning.
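
The abstract names the mechanism only at a high level. The sketch below illustrates one way such a step-by-step verify-and-revise loop could look, assuming a generic `llm(prompt)` completion function; the prompt templates and the name `lot_prompting` are hypothetical illustrations, not the authors' actual code. The reductio ad absurdum check is rendered here as a pair of adversarial reviews (defend the step vs. assume its negation and look for a contradiction), with a third call adjudicating between them.

```python
from typing import Callable, List


def lot_prompting(question: str, llm: Callable[[str], str]) -> str:
    """Minimal sketch of a LoT-style verify-and-revise loop.

    `llm` is any text-completion function; the prompts are
    illustrative, not the paper's exact templates.
    """
    # Step 1: draft an initial zero-shot chain of thought.
    chain: List[str] = llm(
        f"Q: {question}\nA: Let's think step by step."
    ).splitlines()

    # Step 2: verify each step via reductio ad absurdum -- ask the
    # model to defend the step and, separately, to assume its
    # negation and check for a contradiction, then adjudicate.
    i = 0
    while i < len(chain):
        step = chain[i]
        context = "\n".join(chain[:i])
        defense = llm(f"{context}\nExplain why this step is valid: {step}")
        attack = llm(
            f"{context}\nAssume the following step is FALSE and check "
            f"whether that assumption leads to a contradiction: {step}"
        )
        verdict = llm(
            f"Review A: {defense}\nReview B: {attack}\n"
            f"Is the step '{step}' logically valid? Answer yes or no."
        )

        # Step 3: if the step fails review, rewrite it and regenerate
        # the rest of the chain from the corrected prefix. (A real
        # implementation would cap the number of revisions.)
        if verdict.strip().lower().startswith("no"):
            revised = llm(
                f"{context}\nRewrite this step so it is logically "
                f"sound: {step}"
            )
            tail = llm(
                "\n".join(chain[:i] + [revised])
                + "\nContinue the reasoning to a final answer."
            ).splitlines()
            chain = chain[:i] + [revised] + tail
        i += 1

    return "\n".join(chain)
```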