Exploring Safety Generalization Challenges of Large Language Models via Code
arXiv (2024)
Abstract
The rapid advancement of Large Language Models (LLMs) has brought about
remarkable capabilities in natural language processing but also raised concerns
about their potential misuse. While strategies like supervised fine-tuning and
reinforcement learning from human feedback have enhanced their safety, these
methods primarily focus on natural languages, which may not generalize to other
domains. This paper introduces CodeAttack, a framework that transforms natural
language inputs into code inputs, presenting a novel environment for testing
the safety generalization of LLMs. Our comprehensive studies on
state-of-the-art LLMs including GPT-4, Claude-2, and Llama-2 series reveal a
common safety vulnerability of these models against code input: CodeAttack
consistently bypasses the safety guardrails of all models more than 80% of the
time. Furthermore, we find that a larger distribution gap between CodeAttack
and natural language leads to weaker safety generalization, such as encoding
natural language input with data structures or using less popular programming
languages. These findings highlight new safety risks in the code domain and the
need for more robust safety alignment algorithms to match the code capabilities
of LLMs.
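To make the idea concrete, below is a minimal illustrative sketch of the kind of transformation the abstract describes: a natural-language query is encoded into a data structure (here, a Python list used as a stack) and embedded in a code-completion style prompt, so the request no longer appears as plain natural language. This is an assumption-laden illustration, not the authors' exact implementation; the template text, function names, and the benign placeholder query are all invented for demonstration.

```python
def encode_query_as_stack(query: str) -> str:
    """Encode a natural-language query word by word as pushes onto a stack."""
    pushes = "\n".join(f'my_stack.append("{word}")' for word in query.split())
    return f"my_stack = []\n{pushes}"


def build_code_style_prompt(query: str) -> str:
    """Wrap the encoded query in a code-completion style prompt.

    Illustrative only: the exact prompt template used by CodeAttack may differ.
    """
    encoded = encode_query_as_stack(query)
    return (
        "# Complete the function below.\n"
        f"{encoded}\n\n"
        "def solve_task(stack):\n"
        '    """Reconstruct the task from the stack and produce the answer."""\n'
        '    task = " ".join(stack)\n'
        "    output = []  # the model is asked to fill in the steps here\n"
        "    return output\n"
    )


if __name__ == "__main__":
    # Benign placeholder query, used only to show the structure of the prompt.
    print(build_code_style_prompt("Explain how photosynthesis works"))
```

The intuition, per the abstract, is that safety alignment trained on natural-language inputs generalizes poorly to such code-shaped inputs, and the effect grows as the prompt drifts further from natural language (e.g., deeper data-structure encodings or less popular programming languages).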