KnowTuning: Knowledge-aware Fine-tuning for Large Language Models
CoRR (2024)
Abstract
Despite their success at many natural language processing (NLP) tasks, large
language models (LLMs) still struggle to effectively leverage knowledge for
knowledge-intensive tasks, manifesting limitations such as generating
incomplete, non-factual, or illogical answers. These limitations stem from
inadequate knowledge awareness of LLMs during vanilla fine-tuning. To address
these problems, we propose a knowledge-aware fine-tuning (KnowTuning) method to
explicitly and implicitly improve the knowledge awareness of LLMs. We devise an
explicit knowledge-aware generation stage to train LLMs to explicitly identify
knowledge triples in answers. We also propose an implicit knowledge-aware
comparison stage to train LLMs to implicitly distinguish between reliable and
unreliable knowledge in three aspects: completeness, factuality, and
logicality. Extensive experiments on both generic and medical question
answering (QA) datasets confirm the effectiveness of KnowTuning, through
automatic and human evaluations, across various sizes of LLMs. Finally, we
demonstrate that the improvements of KnowTuning generalize to unseen QA
datasets.
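
The abstract gives no implementation details, so the following is only a minimal Python sketch of how the two stages might be instantiated as training-data construction. It assumes, hypothetically, that a triple extractor has already run over each answer and that the implicit comparison stage is trained on preference pairs; every name, prompt format, and degradation strategy below is an illustrative assumption, not the paper's code.

```python
# Hypothetical sketch of KnowTuning's two stages as data construction.
# All class/function names and prompt formats are assumptions for
# illustration; the paper's actual pipeline is not shown in the abstract.
from dataclasses import dataclass
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)


@dataclass
class SFTExample:
    """One supervised example for the explicit generation stage."""
    prompt: str
    target: str


@dataclass
class PreferencePair:
    """One comparison example for the implicit stage (e.g., for
    DPO-style preference training): a reliable answer preferred
    over a degraded one."""
    prompt: str
    chosen: str
    rejected: str


def explicit_generation_example(question: str, answer: str,
                                triples: List[Triple]) -> SFTExample:
    """Explicit stage: train the LLM to state the knowledge triples
    supporting its answer (triple extraction assumed done upstream)."""
    triple_text = "; ".join(f"({s}, {r}, {o})" for s, r, o in triples)
    return SFTExample(
        prompt=f"Question: {question}\n"
               "Answer, then list the supporting knowledge triples.",
        target=f"{answer}\nTriples: {triple_text}",
    )


def implicit_comparison_example(question: str, reliable: str,
                                unreliable: str) -> PreferencePair:
    """Implicit stage: pair a reliable answer with one degraded along
    one aspect -- e.g., dropping a triple (incompleteness), corrupting
    an object (non-factuality), or shuffling steps (illogicality)."""
    return PreferencePair(prompt=f"Question: {question}",
                          chosen=reliable, rejected=unreliable)


if __name__ == "__main__":
    q = "What causes ocean tides?"
    a = ("Ocean tides are caused mainly by the Moon's gravitational "
         "pull on Earth's water.")
    ex = explicit_generation_example(
        q, a, [("Moon's gravity", "causes", "ocean tides")])
    print(ex.prompt)
    print(ex.target)
```

Under this reading, the explicit stage reduces to ordinary supervised fine-tuning on triple-augmented targets, while the implicit stage supplies the completeness, factuality, and logicality contrasts named in the abstract as preference pairs.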