Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks
CoRR (2023)
Abstract
Large language models (LLMs) are proficient at generating fluent text with
minimal task-specific supervision. Yet, their ability to provide well-grounded
rationalizations for knowledge-intensive tasks remains under-explored. Such
tasks, like commonsense multiple-choice questions, require rationales based on
world knowledge to support predictions and refute alternate options. We
consider the task of generating knowledge-guided rationalization in natural
language by using expert-written examples in a few-shot manner. Surprisingly,
crowd-workers preferred knowledge-grounded rationales over crowdsourced
rationalizations, citing their factuality, sufficiency, and comprehensive
refutations. Although LLM-generated rationales were preferable, further
improvement in their conciseness and novelty is needed. In another study, we show
how rationalization of incorrect model predictions erodes humans' trust in
LLM-generated rationales. Motivated by these observations, we create a
two-stage pipeline to review task predictions and eliminate potentially
incorrect decisions before rationalization, enabling trustworthy rationale generation.
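The abstract describes the two-stage pipeline only at a high level. The sketch below is a minimal illustration of that review-then-rationalize structure, assuming a generic `llm(prompt) -> str` callable; the function names, prompts, and abstain-on-rejection behavior are assumptions made for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of a two-stage review-then-rationalize pipeline.
# Stage 1 reviews a task prediction; stage 2 rationalizes only accepted ones.
from typing import Callable, Optional


def review_prediction(llm: Callable[[str], str], question: str,
                      choices: list[str], prediction: str) -> bool:
    """Stage 1: ask the model to vet the prediction before rationalizing."""
    prompt = (
        f"Question: {question}\n"
        f"Options: {', '.join(choices)}\n"
        f"Proposed answer: {prediction}\n"
        "Is this answer correct? Reply 'yes' or 'no'."
    )
    return llm(prompt).strip().lower().startswith("yes")


def rationalize(llm: Callable[[str], str], question: str,
                choices: list[str], prediction: str) -> str:
    """Stage 2: generate a knowledge-grounded rationale for a vetted answer."""
    prompt = (
        f"Question: {question}\n"
        f"Options: {', '.join(choices)}\n"
        f"Answer: {prediction}\n"
        "Explain, using world knowledge, why this answer is correct "
        "and why each alternative option is not."
    )
    return llm(prompt)


def pipeline(llm: Callable[[str], str], question: str,
             choices: list[str], prediction: str) -> Optional[str]:
    """Rationalize only predictions that pass review; abstain otherwise."""
    if not review_prediction(llm, question, choices, prediction):
        return None  # withhold the rationale for a likely-incorrect prediction
    return rationalize(llm, question, choices, prediction)
```

Abstaining when review fails mirrors the abstract's motivation: rationalizing an incorrect prediction erodes human trust, so filtering such predictions out before rationale generation is the safer default.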
Key words
large language models, language models, rationalizers, knowledge-intensive