MultiAgent Collaboration Attack: Investigating Adversarial Attacks in Large Language Model Collaborations via Debate
arXiv (2024)
Abstract
Large Language Models (LLMs) have shown exceptional results on current
benchmarks when working individually. The advancement in their capabilities,
along with a reduction in parameter size and inference times, has facilitated
the use of these models as agents, enabling interactions among multiple models
to execute complex tasks. Such collaborations offer several advantages,
including the use of specialized models (e.g., coding), improved confidence
through multiple computations, and enhanced divergent thinking, leading to more
diverse outputs. Thus, the collaborative use of language models is expected to
grow significantly in the coming years. In this work, we evaluate the behavior
of a network of models collaborating through debate under the influence of an
adversary. We introduce pertinent metrics to assess the adversary's
effectiveness, focusing on system accuracy and model agreement. Our findings
highlight the importance of a model's persuasive ability in influencing others.
Additionally, we explore inference-time methods to generate more compelling
arguments and evaluate the potential of prompt-based mitigation as a defensive
strategy.
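
To make the setup concrete, below is a minimal, hypothetical sketch of the kind of debate protocol and metrics the abstract describes: several agents exchange answers over multiple rounds while one adversary argues for a wrong answer, and the system is scored on accuracy (via majority vote) and inter-model agreement. The agent behavior here is stubbed with toy functions; the names `debate`, `honest_agent`, `adversary`, and `agreement` are illustrative assumptions, not the paper's actual code, and in the paper each agent would be an LLM call.

```python
# Hypothetical sketch of a multi-agent debate with one adversary.
# Agents are stubbed as plain functions standing in for LLM calls.
from collections import Counter
from typing import Callable

# An agent maps (question, peer answers from the previous round) -> answer.
Agent = Callable[[str, list[str]], str]

def debate(question: str, agents: list[Agent], rounds: int = 3) -> list[str]:
    """Run a synchronous debate: each round, every agent sees all answers
    from the previous round (including its own, for simplicity) and may
    revise its position."""
    answers = [agent(question, []) for agent in agents]
    for _ in range(rounds - 1):
        answers = [agent(question, answers) for agent in agents]
    return answers

def majority_vote(answers: list[str]) -> str:
    """The system's final answer: the most common position."""
    return Counter(answers).most_common(1)[0][0]

def agreement(answers: list[str]) -> float:
    """Fraction of agents endorsing the majority answer."""
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / len(answers)

# --- stub agents (placeholders for LLM calls) ------------------------
def honest_agent(question: str, peers: list[str]) -> str:
    # Defers to a strict-majority peer consensus, otherwise answers
    # with the ground truth for this toy question.
    if peers:
        answer, count = Counter(peers).most_common(1)[0]
        if count > len(peers) // 2:
            return answer
    return "42"

def adversary(question: str, peers: list[str]) -> str:
    # Always argues for a fixed wrong answer, regardless of peers.
    return "17"

if __name__ == "__main__":
    agents = [honest_agent, honest_agent, honest_agent, adversary]
    final = debate("What is 6 * 7?", agents, rounds=3)
    print("final answers:", final)          # ['42', '42', '42', '17']
    print("system answer:", majority_vote(final))  # '42'
    print("agreement:", agreement(final))          # 0.75
```

In this toy run the honest majority resists the adversary, so agreement stays at 0.75; the paper's finding that persuasive ability matters corresponds to the adversary being able to shift the peer consensus that `honest_agent` defers to.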