Controlling Large Language Model Agents with Entropic Activation Steering
arXiv (2024)
Abstract
The generality of pretrained large language models (LLMs) has prompted
increasing interest in their use as in-context learning agents. To be
successful, such agents must form beliefs about how to achieve their goals
based on limited interaction with their environment, resulting in uncertainty
about the best action to take at each step. In this paper, we study how LLM
agents form and act on these beliefs by conducting experiments in controlled
sequential decision-making tasks. To begin, we find that LLM agents are
overconfident: They draw strong conclusions about what to do based on
insufficient evidence, resulting in inadequately explorative behavior. We dig
deeper into this phenomenon and show how it emerges from a collapse in the
entropy of the action distribution implied by sampling from the LLM. We then
demonstrate that existing token-level sampling techniques are by themselves
insufficient to make the agent explore more. Motivated by this fact, we
introduce Entropic Activation Steering (EAST), an activation steering method
for in-context LLM agents. EAST computes a steering vector as an
entropy-weighted combination of representations, and uses it to manipulate an
LLM agent's uncertainty over actions by intervening on its activations during
the forward pass. We show that EAST can reliably increase the entropy in an LLM
agent's actions, causing more explorative behavior to emerge. Finally, EAST
modifies the subjective uncertainty an LLM agent expresses, paving the way to
interpreting and controlling how LLM agents represent uncertainty about their
decisions.
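The abstract gives enough detail to sketch EAST's two core operations: building a steering vector as an entropy-weighted combination of layer activations, and adding that vector to the hidden states during the forward pass. Below is a minimal PyTorch sketch based only on this description; the entropy estimator, the layer choice `k`, the scale `alpha`, and the hook mechanics are all illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of an EAST-style intervention, reconstructed from the
# abstract alone. Names and hyperparameters (alpha, layer index) are
# assumptions for illustration, not the paper's actual values.
import torch


def action_entropy(action_counts: torch.Tensor) -> torch.Tensor:
    """Shannon entropy (in nats) of the empirical action distribution,
    e.g. obtained by sampling the agent many times on one prompt."""
    p = action_counts / action_counts.sum()
    p = p[p > 0]  # drop zero-probability actions to avoid log(0)
    return -(p * p.log()).sum()


def entropy_weighted_steering_vector(
    activations: torch.Tensor,  # (n_prompts, d_model) hidden states at one layer
    entropies: torch.Tensor,    # (n_prompts,) action entropy per prompt
) -> torch.Tensor:
    """Steering vector: an entropy-weighted combination of representations,
    so high-entropy (more uncertain) contexts contribute more."""
    w = entropies / entropies.sum()
    return (w.unsqueeze(-1) * activations).sum(dim=0)


def make_steering_hook(v: torch.Tensor, alpha: float = 8.0):
    """Forward hook that adds alpha * v to a layer's output, intervening
    on the activations during the forward pass."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * v.to(hidden.dtype)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return hook
```

For a Hugging Face LLaMA-style model, one might attach the hook with `model.model.layers[k].register_forward_hook(make_steering_hook(v))`; the layer index `k` and the scale `alpha` would need tuning per model, and both are assumptions here rather than details given in the abstract.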