Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning
CoRR (2024)
Abstract
The open-sourcing of large language models (LLMs) accelerates application
development, innovation, and scientific progress. This includes both base
models, which are pre-trained on extensive datasets without alignment, and
aligned models, deliberately designed to align with ethical standards and human
values. Contrary to the prevalent assumption that the inherent
instruction-following limitations of base LLMs serve as a safeguard against
misuse, our investigation exposes a critical oversight in this belief. By
deploying carefully designed demonstrations, our research demonstrates that
base LLMs can effectively interpret and execute malicious instructions. To
systematically assess these risks, we introduce a novel set of risk evaluation
metrics. Empirical results reveal that the outputs from base LLMs can exhibit
risk levels on par with those of models fine-tuned for malicious purposes. This
vulnerability, which requires neither specialized knowledge nor additional
training, can be exploited by almost anyone, highlighting the substantial risk
and the critical need for immediate attention to the security of base LLMs.
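To make the mechanism concrete, below is a minimal sketch of the generic in-context learning setup the abstract refers to: a base (non-aligned) causal LM is prompted with a few instruction-response demonstrations so that it continues the pattern for a new instruction. The model choice (gpt2) and the benign demonstrations are illustrative assumptions, not the paper's actual prompts, demonstration design, or risk evaluation metrics.

```python
# Sketch: eliciting instruction-following from a base LLM via
# in-context demonstrations. Assumes the Hugging Face transformers
# library; gpt2 is a small stand-in for any open base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Few-shot demonstrations establish the instruction -> response
# format purely in context; no fine-tuning is involved.
demonstrations = [
    ("Translate to French: Good morning.", "Bonjour."),
    ("Summarize: The sky was clear and the sun was bright.",
     "It was a sunny day."),
]
query = "Translate to French: Thank you."

prompt = ""
for instruction, response in demonstrations:
    prompt += f"Instruction: {instruction}\nResponse: {response}\n\n"
prompt += f"Instruction: {query}\nResponse:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,  # gpt2 defines no pad token
)
# Print only the continuation generated for the final instruction.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

An aligned chat model would be expected to refuse a harmful query at this point; the paper's observation is that a base model given such demonstrations simply continues the pattern regardless of the instruction's intent.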