"Correct answers" from the psychology of artificial intelligence

arXiv (2023)

Abstract
We re-replicate 14 psychology studies from the Many Labs 2 replication project (Klein et al., 2018) with OpenAI's text-davinci-003 model, colloquially known as GPT3.5. Among the eight studies we could analyse, our GPT sample replicated 37.5% of the original results and 37.5% of the Many Labs 2 results. We could not analyse the remaining six studies, due to an unexpected phenomenon we call the "correct answer" effect. Different runs of GPT3.5 answered nuanced questions probing political orientation, economic preference, judgement, and moral philosophy with zero or near-zero variation in responses, converging on a supposedly "correct answer." Most but not all of these "correct answers" were robust to changing the order of answer choices. One exception occurred in the Moral Foundations Theory survey (Graham et al., 2009), for which GPT3.5 almost always identified as a conservative in the original condition (N=1,030, 99.6%) and as a liberal in the reverse-order condition (N=1,030, 99.3%). GPT3.5's responses to subsequent questions revealed post-hoc rationalisation; there was a relative bias in the direction of its previously reported political orientation. But both self-reported GPT conservatives and self-reported GPT liberals revealed right-leaning Moral Foundations, although the right-leaning bias of self-reported GPT liberals was weaker. We hypothesise that this pattern was learned from a conservative bias in the model's largely Internet-based training data. Since AI models of the future may be trained on much of the same Internet data as GPT3.5, our results raise concerns that a hypothetical AI-led future may be subject to a diminished diversity of thought.
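
The abstract's core manipulation, repeatedly posing the same survey item to text-davinci-003 with the answer choices in original versus reversed order and tallying the response distribution, can be illustrated with a minimal sketch. This is not the authors' code: it assumes the legacy OpenAI Python SDK (pre-1.0), in which text-davinci-003 was served through the Completions endpoint, and the survey item and answer labels below are hypothetical stand-ins for the Many Labs 2 materials.

```python
# Illustrative sketch only -- not the authors' pipeline. Assumes openai<1.0,
# where text-davinci-003 is queried via openai.Completion.create; the item
# and choice labels are hypothetical placeholders.
import collections
import openai

ITEM = "To what extent do you consider yourself politically liberal or conservative?"
CHOICES = ["Very liberal", "Liberal", "Moderate", "Conservative", "Very conservative"]

def ask(choices, n_runs=10):
    """Pose the item with a given answer-choice order and tally the chosen labels."""
    counts = collections.Counter()
    options = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(choices))
    prompt = f"{ITEM}\n{options}\nAnswer with the number of one option only:"
    for _ in range(n_runs):
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=2,
            temperature=1.0,  # sampling enabled, yet runs may still collapse onto one "correct answer"
        )
        text = resp.choices[0].text.strip()
        try:
            counts[choices[int(text) - 1]] += 1  # map the numeric reply back to its label
        except (ValueError, IndexError):
            counts["unparsed"] += 1
    return counts

# Compare the response distributions across the two orderings.
print("original order:", ask(CHOICES))
print("reversed order:", ask(list(reversed(CHOICES))))
```

Under this kind of setup, a near-degenerate distribution in one ordering that flips when the ordering is reversed corresponds to the order-sensitive "correct answer" effect the abstract reports for the Moral Foundations Theory survey.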