LLM-based Test-driven Interactive Code Generation: User Study and Empirical Evaluation
arXiv (2024)
Abstract
Large language models (LLMs) have shown great potential in automating
significant aspects of coding by producing natural code from informal natural
language (NL) intent. However, since NL is informal, it does not lend itself
easily to checking that the generated code correctly satisfies the user's intent. In this
paper, we propose a novel interactive workflow TiCoder for guided intent
clarification (i.e., partial formalization) through tests to support the
generation of more accurate code suggestions. Through a mixed-methods user
study with 15 programmers, we present an empirical evaluation of the
effectiveness of the workflow in improving code generation accuracy. We find that
participants using the proposed workflow are significantly more likely to
correctly evaluate AI-generated code, and report significantly less
task-induced cognitive load. Furthermore, we test the potential of the workflow
at scale with four different state-of-the-art LLMs on two Python datasets,
using an idealized proxy for user feedback. We observe an average absolute
improvement of 38.43% in code generation accuracy for both datasets and across
all LLMs within 5 user interactions, in addition to the automatic generation of
accompanying unit tests.
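For intuition only, below is a minimal sketch of a test-driven interactive pruning loop of the kind the abstract describes: the model proposes candidate code and candidate tests, the user approves or rejects tests, and suggestions inconsistent with the answers are discarded. The function `prune_by_tests`, its parameters (`passes`, `approve`), and the even-split test-selection heuristic are illustrative assumptions, not the paper's actual TiCoder algorithm or API.

```python
from typing import Callable

def prune_by_tests(
    candidates: list[str],
    tests: list[str],
    passes: Callable[[str, str], bool],
    approve: Callable[[str], bool],
    max_queries: int = 5,
) -> tuple[list[str], list[str]]:
    """Interactively prune LLM code suggestions via questions about generated tests.

    candidates -- LLM-generated code suggestions for the NL intent
    tests      -- LLM-generated candidate tests (partial formalizations of intent)
    passes     -- True if a code candidate passes a test (e.g. run in a sandbox)
    approve    -- True if the user confirms the test matches their intent
    """
    approved: list[str] = []
    for _ in range(max_queries):
        if len(candidates) <= 1 or not tests:
            break

        # Ask about the test that splits the surviving candidates most evenly,
        # so either answer discards as many suggestions as possible.
        def split_score(t: str) -> int:
            passing = sum(passes(c, t) for c in candidates)
            return min(passing, len(candidates) - passing)

        test = max(tests, key=split_score)
        tests.remove(test)

        if approve(test):
            # Test reflects the intent: keep only candidates that pass it.
            approved.append(test)
            candidates = [c for c in candidates if passes(c, test)]
        else:
            # Test contradicts the intent: keep only candidates that fail it.
            candidates = [c for c in candidates if not passes(c, test)]

    return candidates, approved
```

In the scaled evaluation described in the abstract, the human answer to `approve` would be replaced by the idealized user-feedback proxy, so the loop can be run automatically over both datasets and all four LLMs.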