Code Repair with LLMs gives an Exploration-Exploitation Tradeoff
CoRR (2024)
Abstract
Iteratively improving and repairing source code with large language models
(LLMs), known as refinement, has emerged as a popular way of generating
programs that would be too complex to construct in one shot. Given a bank of
test cases, together with a candidate program, an LLM can improve that program
by being prompted with failed test cases. But it remains an open question how
to best iteratively refine code, with prior work employing simple greedy or
breadth-first strategies. We show here that refinement exposes an
explore-exploit tradeoff: exploit by refining the program that passes the most
test cases, or explore by refining a less-considered program. We frame this
as an arm-acquiring bandit problem, which we solve with Thompson Sampling. The
resulting LLM-based program synthesis algorithm is broadly applicable: Across
loop invariant synthesis, visual reasoning puzzles, and competition programming
problems, we find that our new method can solve more problems using fewer
language model calls.
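The abstract's framing admits a compact sketch. Below is a minimal, hypothetical illustration of Thompson Sampling for an arm-acquiring bandit over candidate programs: each arm is a program, a Beta posterior models the chance that refining that program yields an improvement, and every refinement joins the pool as a new arm. The helpers `llm_refine` and `pass_rate` are stand-ins invented for this sketch, not the paper's implementation; a real system would prompt an LLM with the failing tests and run the program against the test bank.

```python
import random

def llm_refine(program: str, failed_tests: list) -> str:
    """Hypothetical stand-in: prompt an LLM with the program and its failing tests."""
    return program + " (refined)"

def pass_rate(program: str) -> float:
    """Hypothetical stand-in: run the test bank and return the fraction passed."""
    return random.random()

class Arm:
    """A candidate program treated as a bandit arm."""
    def __init__(self, program: str):
        self.program = program
        self.score = pass_rate(program)   # fraction of tests this program passes
        self.alpha, self.beta = 1.0, 1.0  # uniform Beta prior on "refining this helps"

def thompson_refine(seed_program: str, budget: int) -> str:
    arms = [Arm(seed_program)]
    for _ in range(budget):
        # Thompson step: draw from each arm's posterior and refine the best draw.
        # A little-tried program has a wide posterior and sometimes wins (explore);
        # a reliably improving one has a concentrated high posterior (exploit).
        parent = max(arms, key=lambda a: random.betavariate(a.alpha, a.beta))
        child = Arm(llm_refine(parent.program, failed_tests=[]))
        arms.append(child)                # arm acquisition: the child becomes a new arm
        # Update the parent's posterior on whether refining it improved the pass rate.
        if child.score > parent.score:
            parent.alpha += 1
        else:
            parent.beta += 1
    return max(arms, key=lambda a: a.score).program
```

Read this way, the strategies from prior work are the two extremes: greedy refinement always picks the highest-scoring arm, breadth-first search ignores scores entirely, and posterior sampling interpolates between the two.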