How Far Are LLMs from Believable AI? A Benchmark for Evaluating the Believability of Human Behavior Simulation
arXiv (2023)

Abstract
In recent years, AI has demonstrated remarkable capabilities in simulating
human behaviors, particularly those implemented with large language models
(LLMs). However, due to the lack of systematic evaluation of LLMs' simulated
behaviors, the believability of LLMs among humans remains ambiguous, i.e., it
is unclear which behaviors of LLMs are convincingly human-like and which need
further improvements. In this work, we design SimulateBench to evaluate the
believability of LLMs when simulating human behaviors. In specific, we evaluate
the believability of LLMs based on two critical dimensions: 1) consistency: the
extent to which LLMs can behave consistently with the given information of a
human to simulate; and 2) robustness: the ability of LLMs' simulated behaviors
to remain robust when faced with perturbations. SimulateBench includes 65
character profiles and a total of 8,400 questions to examine LLMs' simulated
behaviors. Based on SimulateBench, we evaluate the performances of 10 widely
used LLMs when simulating characters. The experimental results reveal that
current LLMs struggle to align their behaviors with assigned characters and are
vulnerable to perturbations in certain factors.