LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code
arXiv (2024)
Abstract
Large Language Models (LLMs) applied to code-related applications have
emerged as a prominent field, attracting significant interest from both
academia and industry. However, as new and improved LLMs are developed,
existing evaluation benchmarks (e.g., HumanEval, MBPP) are no longer sufficient
for assessing their capabilities. In this work, we propose LiveCodeBench, a
comprehensive and contamination-free evaluation of LLMs for code, which
continuously collects new problems over time from contests across three
competition platforms, namely LeetCode, AtCoder, and CodeForces. Notably, our
benchmark also focuses on a broader range of code-related capabilities, such as
self-repair, code execution, and test output prediction, beyond just code
generation. Currently, LiveCodeBench hosts four hundred high-quality coding
problems that were published between May 2023 and February 2024. We have
evaluated 9 base LLMs and 20 instruction-tuned LLMs on LiveCodeBench. We
present empirical findings on contamination, holistic performance comparisons,
potential overfitting in existing benchmarks, as well as individual model
comparisons. We will release all prompts and model completions for further
community analysis, along with a general toolkit for adding new scenarios and
models.
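
The contamination-free protocol rests on comparing each problem's contest release date against a model's training cutoff, so that evaluated problems cannot appear in the training data. Below is a minimal Python sketch of that filtering step; the problem records and field names are hypothetical illustrations, not the benchmark's actual schema:

    from datetime import date

    # Hypothetical problem records; LiveCodeBench annotates each problem
    # with its contest release date so evaluations can be restricted to
    # a chosen time window.
    problems = [
        {"id": "leetcode-2873", "platform": "LeetCode",   "release": date(2023, 9, 17)},
        {"id": "abc-321-d",     "platform": "AtCoder",    "release": date(2023, 6, 24)},
        {"id": "cf-1850-c",     "platform": "CodeForces", "release": date(2023, 4, 2)},
    ]

    def contamination_free(problems, model_cutoff):
        """Keep only problems released after the model's training cutoff."""
        return [p for p in problems if p["release"] > model_cutoff]

    # Example: a model whose training data ends in May 2023 is evaluated
    # only on problems published afterwards, so its solutions cannot have
    # been memorized from the training corpus.
    eval_set = contamination_free(problems, model_cutoff=date(2023, 5, 1))
    print([p["id"] for p in eval_set])

Because new contest problems are collected continuously, the same filter can be re-applied as newer models with later cutoffs are released.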