Causal Evaluation of Language Models
arXiv (2024)
Abstract
Causal reasoning is viewed as crucial for achieving human-level machine
intelligence. Recent advances in language models have expanded the horizons of
artificial intelligence across various domains, sparking inquiries into their
potential for causal reasoning. In this work, we introduce Causal evaluation of
Language Models (CaLM), which, to the best of our knowledge, is the first
comprehensive benchmark for evaluating the causal reasoning capabilities of
language models. First, we propose the CaLM framework, which establishes a
foundational taxonomy consisting of four modules: causal target (i.e., what to
evaluate), adaptation (i.e., how to obtain the results), metric (i.e., how to
measure the results), and error (i.e., how to analyze the bad results). This
taxonomy defines a broad evaluation design space while providing a systematic
basis for selecting criteria and priorities. Second, we compose the CaLM
dataset, comprising
126,334 data samples, to provide curated sets of causal targets, adaptations,
metrics, and errors, offering extensive coverage for diverse research pursuits.
Third, we conduct an extensive evaluation of 28 leading language models on a
core set of 92 causal targets, 9 adaptations, 7 metrics, and 12 error types.
Fourth, we perform detailed analyses of the evaluation results across various
dimensions (e.g., adaptation, scale). Fifth, we present 50 high-level empirical
findings across 9 dimensions (e.g., model), providing valuable guidance for
future language model development. Finally, we develop a multifaceted platform,
including a website, leaderboards, datasets, and toolkits, to support scalable
and adaptable assessments. We envision CaLM as an ever-evolving benchmark for
the community, systematically updated with new causal targets, adaptations,
models, metrics, and error types to reflect ongoing research advancements.
The project website is at https://opencausalab.github.io/CaLM.
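
To make the four-module taxonomy concrete, the sketch below shows how one cell of such an evaluation grid might be represented in Python. This is a minimal illustration under assumed names: the class EvaluationSpec, its fields, and the example values are hypothetical and do not reflect CaLM's actual toolkit API.

    # Hypothetical sketch of the four-module taxonomy described in the
    # abstract: causal target, adaptation, metric, and error.
    # All names here are illustrative, not CaLM's real interface.
    from dataclasses import dataclass, field

    @dataclass
    class EvaluationSpec:
        causal_target: str   # what to evaluate, e.g. a causal reasoning task
        adaptation: str      # how to obtain the results, e.g. a prompting strategy
        metric: str          # how to measure the results
        error_types: list[str] = field(default_factory=list)  # how to analyze bad results

    # One cell of the evaluation grid (the paper's core set spans
    # 92 causal targets, 9 adaptations, 7 metrics, and 12 error types).
    spec = EvaluationSpec(
        causal_target="causal discovery",
        adaptation="zero-shot prompting",
        metric="accuracy",
        error_types=["empty response", "contradiction"],
    )
    print(spec)

A structure like this makes the abstract's framing tangible: each evaluated model is scored over many such (target, adaptation, metric) combinations, with error types used to analyze failing outputs.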