Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing
CoRR (2024)
Abstract
Aligned large language models (LLMs) are vulnerable to jailbreaking attacks,
which bypass the safeguards of targeted LLMs and fool them into generating
objectionable content. While initial defenses show promise against token-based
threat models, no existing defense provides robustness against semantic
attacks while avoiding unfavorable trade-offs between robustness and nominal
performance. To meet this need, we propose SEMANTICSMOOTH, a
smoothing-based defense that aggregates the predictions of multiple
semantically transformed copies of a given input prompt. Experimental results
demonstrate that SEMANTICSMOOTH achieves state-of-the-art robustness against
GCG, PAIR, and AutoDAN attacks while maintaining strong nominal performance on
instruction following benchmarks such as InstructionFollowing and AlpacaEval.
The code will be publicly available at
https://github.com/UCSB-NLP-Chang/SemanticSmooth.
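The smoothing idea described above, aggregating the target LLM's predictions over several semantically transformed copies of the input prompt, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the helpers transform_prompt, query_llm, and is_refusal, the transform list, and the majority-vote aggregation rule are all hypothetical stand-ins.

```python
# Minimal sketch of smoothing via semantic transformations:
# rewrite the prompt several ways, query the model on each copy,
# and aggregate the per-copy safety decisions by majority vote.
# All helpers below are hypothetical placeholders, not the paper's code.
import random
from collections import Counter

TRANSFORMS = ["paraphrase", "summarize", "translate_and_back", "spellcheck"]

def transform_prompt(prompt: str, transform: str) -> str:
    """Hypothetical: return a semantically equivalent rewrite of `prompt`."""
    return f"[{transform}] {prompt}"  # placeholder rewrite

def query_llm(prompt: str) -> str:
    """Hypothetical: return the target LLM's response to `prompt`."""
    return "I cannot help with that."  # placeholder response

def is_refusal(response: str) -> bool:
    """Hypothetical safety check on a single response."""
    return response.lower().startswith("i cannot")

def semantic_smooth(prompt: str, n_copies: int = 5) -> str:
    """Aggregate predictions over semantically transformed prompt copies."""
    responses = [
        query_llm(transform_prompt(prompt, random.choice(TRANSFORMS)))
        for _ in range(n_copies)
    ]
    # Majority vote across copies: if most transformed copies elicit
    # a refusal, the smoothed defense refuses the original prompt.
    votes = Counter(is_refusal(r) for r in responses)
    if votes[True] >= votes[False]:
        return "Request declined by the smoothed defense."
    # Otherwise return one of the non-refusal responses.
    return next(r for r in responses if not is_refusal(r))
```

The intuition, as with other smoothing-based defenses, is that an adversarial prompt tuned against one exact token sequence tends not to survive semantic rewrites, so aggregating over transformed copies dilutes the attack while leaving benign instructions largely intact.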