Is ChatGPT Humanly Irrational?

Research Square (2023)

Abstract
We delve into the crossroads of artificial intelligence (AI) and cognitive science, spotlighting OpenAI's advanced language model, ChatGPT. Renowned for generating human-like text, ChatGPT has been widely used in various applications. However, its ability to replicate human cognitive processes, particularly decision-making behavior, remains largely unexplored. We evaluate ChatGPT's decision-making patterns and show that they strikingly mirror those of human subjects, including patterns traditionally termed "irrational" under standard economic theory. This finding challenges the prevailing assumption that AI systems operate solely on rational computations. It suggests that, despite its algorithmic nature, ChatGPT can reflect human cognitive biases when simulating human roles, adding a new dimension to our understanding of AI behavior. Our results place AI models like ChatGPT in the broader context of cognitive science, indicating their potential to mimic not just human language but also human cognitive processes. From a broader perspective, our findings underscore the capacity of AI in behavioral research and stimulate a necessary dialogue on AI design, transparency, and ethical implications. Our study bridges human and machine intelligence, highlighting the potential of AI to enhance our understanding of decision-making processes in artificial agents.
Keywords
ChatGPT, humanly irrational