Towards more realistic evaluation of LLM-based code generation: an experimental study and beyond
arXiv (2024)
Abstract
To evaluate the code generation capabilities of Large Language Models (LLMs)
in complex real-world software development scenarios, many evaluation
approaches have been developed. They typically leverage contextual code from
the latest version of a project to facilitate LLMs in accurately generating the
desired function. However, such evaluation approaches fail to account for the
dynamic evolution of software projects over time, a gap we refer to as the
evolving-ignored situation, which leads to future context leakage and missing
useful context. This in turn results in inaccurate evaluation of LLMs'
performance. In this paper, we conduct an empirical study to deeply understand
LLMs' code generation performance within settings that reflect the evolving
nature of software development. To achieve this, we first construct an
evolving-aware repository-level code generation dataset, namely HumanEvo,
equipped with an automated execution-based evaluation tool. Second, we manually
categorize HumanEvo according to dependency levels to more comprehensively
analyze the model's performance in generating functions with different
dependency levels. Third, we conduct extensive experiments on HumanEvo with
seven representative and diverse LLMs to verify the effectiveness of the
proposed benchmark. We obtain many important findings through our experimental
study. For example, we find that previous evolving-ignored evaluation
approaches lead to inflated LLM performance, ranging from 10.0% to 61.1%.
Based on our findings, we advocate for more realistic, evolving-aware
evaluation of LLMs on code generation. We also build a shared evolving-aware
code generation toolbox to facilitate future research. The replication
package, including source code, datasets, and an appendix, is available at
https://github.com/DeepSoftwareAnalytics/EvoEval.
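The core idea of evolving-aware evaluation is to give the model only the repository context that existed when the target function was originally committed, so no "future" code leaks into the prompt. A minimal sketch of that filtering step is shown below; the per-file timestamp model and all names here are illustrative simplifications, not the actual HumanEvo tool's API (which operates on git history):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class ContextFile:
    """A repository file snapshot, tagged with when it was committed."""
    path: str
    committed_at: datetime
    content: str


def evolving_aware_context(files: list[ContextFile],
                           target_commit_time: datetime) -> list[ContextFile]:
    """Keep only context committed strictly before the target function's
    commit time, preventing future context leakage into the prompt."""
    return [f for f in files if f.committed_at < target_commit_time]


# Example: a file committed after the target function must be excluded.
repo = [
    ContextFile("utils.py", datetime(2022, 3, 1), "def helper(): ..."),
    ContextFile("new_api.py", datetime(2024, 5, 1), "def future_helper(): ..."),
]
context = evolving_aware_context(repo, target_commit_time=datetime(2023, 6, 1))
```

In a real pipeline this filter would be realized by checking out the parent commit of the target function (e.g. via `git worktree add --detach <dir> <commit>^`) rather than by per-file timestamps, which can be misleading when files are edited repeatedly.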