L2MAC: Large Language Model Automatic Computer for Extensive Code Generation
arXiv (2023)
Abstract
Transformer-based large language models (LLMs) are constrained by the fixed
context window of the underlying transformer architecture, hindering their
ability to produce long and coherent outputs. Memory-augmented LLMs are a
promising solution, but current approaches cannot handle long output generation
tasks since they (1) only focus on reading memory and reduce its evolution to
the concatenation of new memories or (2) use very specialized memories that
cannot adapt to other domains. This paper presents L2MAC, the first practical
LLM-based stored-program automatic computer (von Neumann architecture)
framework, an LLM-based multi-agent system, for long and consistent output
generation. Its memory has two components: the instruction registry, which is
populated with a prompt program to solve the user-given task, and a file store,
which will contain the final and intermediate outputs. Each instruction in turn
is executed by a separate LLM agent, whose context is managed by a control unit
capable of precise memory reading and writing to ensure effective interaction
with the file store. These components enable L2MAC to generate extensive
outputs, bypassing the constraints of the finite context window while producing
outputs that fulfill a complex user-specified task. We empirically demonstrate
that L2MAC achieves state-of-the-art performance in generating large codebases
for system design tasks, significantly outperforming other coding methods in
implementing the detailed user-specified task, and we provide valuable insights
into the reasons for this performance gap.
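The stored-program loop the abstract describes — an instruction registry populated by a prompt program, a file store for outputs, and a control unit that feeds each instruction to a fresh LLM agent with a bounded context — can be sketched roughly as below. All names (`FileStore`, `plan_instructions`, `run_agent`, `control_unit`) are hypothetical stand-ins for illustration, not the authors' actual API; the LLM calls are mocked with plain functions.

```python
# A minimal, assumption-laden sketch of the L2MAC-style control loop.
# The real system prompts an LLM at each step; here the calls are stubbed.
from dataclasses import dataclass, field


@dataclass
class FileStore:
    """Holds intermediate and final outputs, keyed by file path."""
    files: dict = field(default_factory=dict)

    def read(self, path: str) -> str:
        return self.files.get(path, "")

    def write(self, path: str, content: str) -> None:
        self.files[path] = content


def plan_instructions(task: str) -> list[str]:
    """Stand-in for prompting an LLM to populate the instruction
    registry with a step-by-step prompt program for the task."""
    return [f"step {i}: part of '{task}'" for i in range(1, 4)]


def run_agent(instruction: str, context: dict) -> tuple[str, str]:
    """Stand-in for a separate LLM agent executing one instruction;
    returns a (path, content) pair to write to the file store."""
    path = instruction.split(":")[0].replace(" ", "_") + ".txt"
    return path, f"output for [{instruction}] with {len(context)} context files"


def control_unit(task: str) -> FileStore:
    registry = plan_instructions(task)  # instruction registry
    store = FileStore()                 # file store
    for instruction in registry:
        # The control unit decides which files to load into the
        # agent's bounded context window (here: all of them).
        context = {p: store.read(p) for p in store.files}
        path, content = run_agent(instruction, context)
        store.write(path, content)      # precise memory write-back
    return store


store = control_unit("build a calculator app")
print(sorted(store.files))
```

The key idea this sketch tries to capture is that long outputs accumulate in the file store across agent invocations, so no single agent's context window ever needs to hold the whole codebase.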