Relevant or Random: Can LLMs Truly Perform Analogical Reasoning?
CoRR (2024)
Abstract
Analogical reasoning is a unique ability of humans to address unfamiliar
challenges by transferring strategies from relevant past experiences. One key
finding in psychology is that compared with irrelevant past experiences,
recalling relevant ones can help humans better handle new tasks.
Coincidentally, the NLP community has also recently found that prompting large language models (LLMs) to self-generate relevant examples in context helps them solve a given problem better than hand-crafted prompts do. However, it is not yet clear whether relevance is the key factor eliciting such capability, i.e., can LLMs benefit more from self-generated relevant examples than from irrelevant ones? In
this work, we systematically explore whether LLMs can truly perform analogical
reasoning on a diverse set of reasoning tasks. With extensive experiments and
analysis, we show that self-generated random examples can surprisingly achieve
comparable or even better performance, e.g., a 4% performance boost on GSM8K with random biological examples. We find that the accuracy of self-generated
examples is the key factor and subsequently design two improved methods with
significantly reduced inference costs. Overall, we aim to advance a deeper understanding of LLM analogical reasoning and hope this work stimulates further research on the design of self-generated contexts.
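For context, the contrast the abstract draws, self-generated relevant versus random in-context examples, amounts to a difference in prompt templates. The sketch below is a minimal illustration under stated assumptions: the template wording, the choice of three exemplars, and the `call_llm` placeholder are hypothetical and are not the authors' exact prompts.

```python
# Minimal sketch of the self-generated-context setup contrasted in the
# abstract. Template wording, the three-exemplar count, and `call_llm`
# are illustrative assumptions, not the paper's exact prompts.

RELEVANT_TEMPLATE = """\
Your task is to solve the problem below.

# Problem:
{problem}

# Instructions:
1. Recall three relevant and distinct example problems, each followed by a
   worked solution.
2. Using those examples as guidance, solve the initial problem step by step.
"""

RANDOM_TEMPLATE = """\
Your task is to solve the problem below.

# Problem:
{problem}

# Instructions:
1. Write down three random biological facts, each followed by a one-line
   explanation.
2. Then solve the initial problem step by step.
"""


def build_prompt(problem: str, relevant: bool = True) -> str:
    """Build a single-turn prompt that asks the model to self-generate
    in-context examples (relevant or random) before answering."""
    template = RELEVANT_TEMPLATE if relevant else RANDOM_TEMPLATE
    return template.format(problem=problem)


if __name__ == "__main__":
    question = "A train travels 120 km in 1.5 hours. What is its average speed?"
    print(build_prompt(question, relevant=False))
    # answer = call_llm(build_prompt(question))  # call_llm: hypothetical client
```

The only difference between the two conditions is the first instruction; the abstract's central finding is that swapping relevant recall for random facts can leave performance unchanged or even improve it.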