Data Poisoning for In-context Learning
CoRR (2024)
Abstract
In the domain of large language models (LLMs), in-context learning (ICL) has
been recognized for its innovative ability to adapt to new tasks, relying on
examples rather than retraining or fine-tuning. This paper delves into the
critical issue of ICL's susceptibility to data poisoning attacks, an area not
yet fully explored. We ask whether ICL is vulnerable to adversaries who
manipulate example data to degrade model performance. To address this
question, we introduce ICLPoison, a specialized attack framework designed to
exploit the learning mechanisms of ICL. Our approach uniquely employs discrete
text perturbations to strategically influence the hidden states of LLMs during
the ICL process. We outline three representative strategies to implement
attacks under our framework, each rigorously evaluated across a variety of
models and tasks. Our comprehensive tests, including trials on the
sophisticated GPT-4 model, demonstrate that ICL's performance is significantly
compromised under our framework. These revelations indicate an urgent need for
enhanced defense mechanisms to safeguard the integrity and reliability of LLMs
in applications relying on in-context learning.
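To make the abstract's core idea concrete, the sketch below illustrates the general optimization pattern it describes: greedily applying discrete text perturbations to a demonstration example so as to push a model's hidden-state representation away from that of the clean example. This is only a toy illustration under stated assumptions, not the paper's ICLPoison implementation: `embed` is a character-bigram stand-in for an LLM's hidden states, and `candidate_edits` is a hypothetical single-word substitution set standing in for the paper's perturbation strategies.

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for an LLM hidden state: a character-bigram count vector.
    # A real attack would read hidden states from the model itself.
    vec = [0.0] * 64
    for i in range(len(text) - 1):
        vec[(ord(text[i]) * 31 + ord(text[i + 1])) % 64] += 1.0
    return vec

def distance(u: list[float], v: list[float]) -> float:
    # Euclidean distance between two representation vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def candidate_edits(example: str) -> list[str]:
    # Discrete perturbations: here, illustrative single-word substitutions.
    swaps = {"good": "fine", "movie": "film", "great": "grand"}
    words = example.split()
    cands = []
    for i, w in enumerate(words):
        if w.lower() in swaps:
            cands.append(" ".join(words[:i] + [swaps[w.lower()]] + words[i + 1:]))
    return cands

def poison_example(example: str, steps: int = 3) -> str:
    # Greedily pick, at each step, the edit that moves the (toy) hidden
    # state farthest from the clean example's state.
    clean = embed(example)
    current = example
    for _ in range(steps):
        cands = candidate_edits(current)
        if not cands:
            break
        current = max(cands, key=lambda c: distance(embed(c), clean))
    return current
```

For instance, `poison_example("a good movie")` walks through the substitution set and returns `"a fine film"`, a surface-level edit chosen purely for its effect on the representation rather than on human readability, which is the kind of stealthy manipulation the abstract warns about.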