Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions
CoRR (2023)
Abstract
Large language models (LLMs) are capable of answering knowledge-intensive
complex questions with chain-of-thought (CoT) reasoning. However, they tend to
generate factually incorrect reasoning steps when the required knowledge is not
available or up-to-date in models' parameters. Recent works turn to retrieving
external knowledge to augment CoT reasoning. Although promising, these
chain-based methods suffer from two problems: 1) negative retrieval, where
unnecessary or incorrect retrieval may mislead the reasoning; and 2) limited
sight, where, lacking the ability to look backward or forward, a local error
in one step propagates along the chain.
In this paper, we propose a novel approach: Probabilistic Tree-of-thought
Reasoning (ProbTree). First, LLMs translate a complex question into a query
tree, in which each non-root node denotes a sub-question of its parent node.
Then, probabilistic reasoning is conducted over the tree by solving questions
from leaf to root, considering the confidence of both question decomposition
and answering. During reasoning, for leaf nodes, LLMs choose the more
confident answer between Closed-book QA, which uses parametric knowledge, and
Open-book QA, which uses retrieved external knowledge, thus eliminating the
negative retrieval problem. For non-leaf nodes, the hierarchical structure
gives LLMs a broader view: they can reason globally over the information from
child nodes and thus recover from local errors. Experiments on three complex
QA datasets under the open-domain setting show that our approach
significantly outperforms state-of-the-art methods, demonstrating the
effectiveness of probabilistic tree-of-thought reasoning.
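
The reasoning procedure described in the abstract can be viewed as a recursion over the query tree. Below is a minimal sketch in Python of that idea, assuming hypothetical LLM-backed primitives closed_book_qa, open_book_qa, and child_aggregate_qa that each return an (answer, log-confidence) pair; these names, and the exact way confidences are combined, are illustrative assumptions rather than the paper's actual interfaces.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Hypothetical LLM-backed primitives (assumptions for illustration):
# each takes a question and returns (answer, log_confidence).
QAFn = Callable[[str], Tuple[str, float]]

@dataclass
class QueryNode:
    question: str
    decomp_logprob: float = 0.0          # confidence of the decomposition step
    children: List["QueryNode"] = field(default_factory=list)

def solve(node: QueryNode,
          closed_book_qa: QAFn,
          open_book_qa: QAFn,
          child_aggregate_qa: Callable[[str, List[str]], Tuple[str, float]],
          ) -> Tuple[str, float]:
    """Probabilistic tree-of-thought reasoning, leaf to root (sketch).

    At every node the candidate answer with the highest confidence wins:
    leaves choose between parametric (closed-book) and retrieved
    (open-book) knowledge; non-leaf nodes additionally consider an
    answer aggregated from their children, whose confidence folds in
    the decomposition confidence and the children's confidences.
    """
    candidates = [closed_book_qa(node.question), open_book_qa(node.question)]

    if node.children:
        child_answers, child_logprobs = [], []
        for child in node.children:
            ans, lp = solve(child, closed_book_qa, open_book_qa,
                            child_aggregate_qa)
            child_answers.append(ans)
            child_logprobs.append(lp)
        agg_ans, agg_lp = child_aggregate_qa(node.question, child_answers)
        # Joint log-confidence: decomposition + children + aggregation.
        candidates.append(
            (agg_ans, node.decomp_logprob + sum(child_logprobs) + agg_lp))

    return max(candidates, key=lambda c: c[1])
```

Taking the maximum over the candidates at each node is what lets the model fall back on parametric knowledge or direct retrieval when the decomposition itself is low-confidence, which is how local errors are contained rather than propagated along a chain.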