(Ir)rationality and Cognitive Biases in Large Language Models
CoRR (2024)
Abstract
Do large language models (LLMs) display rational reasoning? LLMs have been
shown to contain human biases due to the data they have been trained on;
whether this is reflected in rational reasoning remains less clear. In this
paper, we answer this question by evaluating seven language models using tasks
from the cognitive psychology literature. We find that, like humans, LLMs
display irrationality in these tasks. However, the way this irrationality is
displayed does not reflect that shown by humans. When LLMs answer these tasks
incorrectly, their errors often differ from human-like biases. Moreover, the
LLMs reveal an additional layer of irrationality in the significant
inconsistency of their responses. Aside from the
experimental results, this paper seeks to make a methodological contribution by
showing how we can assess and compare different capabilities of these types of
models, in this case with respect to rational reasoning.