ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base
arXiv preprint (2023)
Abstract
Analogical reasoning is a fundamental cognitive ability of humans. However,
current language models (LMs) still struggle to achieve human-like performance
in analogical reasoning tasks due to a lack of resources for model training. In
this work, we address this gap by proposing ANALOGYKB, a million-scale analogy
knowledge base (KB) derived from existing knowledge graphs (KGs). ANALOGYKB
identifies two types of analogies from the KGs: 1) analogies of the same
relations, which can be directly extracted from the KGs, and 2) analogies of
analogous relations, which are identified with a selection and filtering
pipeline enabled by large language models (LLMs), followed by minor human
effort for data quality control. Evaluations on a series of datasets for two
analogical reasoning tasks (analogy recognition and generation) demonstrate
that ANALOGYKB successfully enables both smaller LMs and LLMs to gain better
analogical reasoning capabilities.
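The first analogy type described above, analogies of the same relation, can be extracted directly from a KG by pairing triples that share a relation. The sketch below illustrates this idea on a toy triple store; the triples and function name are hypothetical and are not taken from the paper's actual pipeline.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical mini-KG as (head, relation, tail) triples (illustrative only).
triples = [
    ("Paris", "capital_of", "France"),
    ("Tokyo", "capital_of", "Japan"),
    ("Berlin", "capital_of", "Germany"),
    ("Einstein", "field", "physics"),
    ("Darwin", "field", "biology"),
]

def same_relation_analogies(triples):
    """Form A:B :: C:D analogies from pairs of triples sharing a relation."""
    by_relation = defaultdict(list)
    for head, rel, tail in triples:
        by_relation[rel].append((head, tail))
    analogies = []
    for rel, pairs in by_relation.items():
        # Every unordered pair of (head, tail) tuples under one relation
        # yields one same-relation analogy.
        for (a, b), (c, d) in combinations(pairs, 2):
            analogies.append((a, b, c, d))
    return analogies

analogies = same_relation_analogies(triples)
# e.g. ("Paris", "France", "Tokyo", "Japan") — Paris:France :: Tokyo:Japan
```

Analogies of analogous relations (the second type) would additionally require judging whether two *different* relations are analogous, which the paper delegates to an LLM-based selection and filtering pipeline plus human quality control.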