Hypothesis Testing Prompting Improves Deductive Reasoning in Large Language Models
arXiv (2024)

Abstract
Combining different forms of prompts with pre-trained large language models
has yielded remarkable results on reasoning tasks (e.g. Chain-of-Thought
prompting). However, along with testing on more complex reasoning, these
methods also expose problems such as invalid reasoning and fictional reasoning
paths. In this paper, we develop Hypothesis Testing Prompting, which
adds conclusion assumptions, backward reasoning, and fact verification during
intermediate reasoning steps. Hypothesis Testing prompting poses
multiple hypotheses and validates conclusions in reverse, converging on a
unique correct answer. Experiments on two challenging deductive reasoning
datasets, ProofWriter and RuleTaker, show that Hypothesis Testing prompting
not only significantly improves accuracy, but also produces a more reasonable
and standardized reasoning process.
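To make the described prompting structure concrete, the sketch below builds a prompt in the style the abstract outlines: assume each candidate conclusion as a hypothesis, reason backward from it, and verify against the given facts. This is a minimal illustration assuming a True/False deduction task (as in ProofWriter/RuleTaker); the function name and exact wording are hypothetical, not the paper's actual template.

```python
# Illustrative sketch only: a Hypothesis Testing style prompt builder that
# embeds conclusion assumption, backward reasoning, and fact verification
# into the requested intermediate reasoning steps. The template wording is
# an assumption, not the paper's exact prompt.

def hypothesis_testing_prompt(facts, rules, question,
                              candidates=("True", "False")):
    """Build a prompt that asks the model to test each candidate
    conclusion by reasoning backward and verifying against the facts."""
    lines = ["Facts:"]
    lines += [f"- {fact}" for fact in facts]
    lines.append("Rules:")
    lines += [f"- {rule}" for rule in rules]
    lines.append(f"Question: {question}")
    # One hypothesis block per candidate conclusion.
    for candidate in candidates:
        lines.append(
            f"Hypothesis: assume the answer is '{candidate}'. "
            "Reason backward from this conclusion to the facts and rules, "
            "verifying each step against the given facts."
        )
    lines.append("Answer with the unique hypothesis that survives "
                 "verification.")
    return "\n".join(lines)


prompt = hypothesis_testing_prompt(
    facts=["Anne is kind."],
    rules=["If someone is kind then they are nice."],
    question="Is Anne nice?",
)
print(prompt)
```

The prompt would then be sent to a language model; the backward-verification framing is what distinguishes this from a plain Chain-of-Thought prompt.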