Language Model Evolution: An Iterated Learning Perspective
CoRR (2024)
Abstract
With the widespread adoption of Large Language Models (LLMs), the prevalence
of iterative interactions among these models is anticipated to increase.
Notably, recent advancements in multi-round self-improving methods allow LLMs
to generate new examples for training subsequent models. At the same time,
multi-agent LLM systems, involving automated interactions among agents, are
also rising in prominence. Thus, over both the short and long term, LLMs may
actively engage in an evolutionary process. We draw parallels between the
behavior of LLMs and the evolution of human culture, as the latter has been
extensively studied by cognitive scientists for decades. Our approach involves
leveraging Iterated Learning (IL), a Bayesian framework that elucidates how
subtle biases are magnified during human cultural evolution, to explain some
behaviors of LLMs. This paper outlines the key characteristics of agents' behavior
under the Bayesian-IL framework and presents predictions that are verified
experimentally across various LLMs. This theoretical framework could
help to more effectively predict and guide the evolution of LLMs in desired
directions.