Subtoxic Questions: Dive Into Attitude Change of LLM's Response in Jailbreak Attempts
CoRR (2024)
Abstract
As prompt jailbreaking of Large Language Models (LLMs) attracts more and more
attention, it is of great significance to establish a generalized research
paradigm for evaluating attack strengths and a basic model for conducting
subtler experiments. In this paper, we propose a novel approach that focuses on
a set of target questions that are inherently more sensitive to jailbreak
prompts, aiming to circumvent the limitations posed by enhanced LLM security.
By designing and analyzing these sensitive questions, this paper reveals a more
effective method of identifying vulnerabilities in LLMs, thereby contributing
to the advancement of LLM security. This research not only challenges existing
jailbreaking methodologies but also fortifies LLMs against potential exploits.