Toxicity Detection for Free
CoRR (2024)
Abstract
Current LLMs are generally aligned to follow safety requirements and tend to
refuse toxic prompts. However, LLMs can fail to refuse toxic prompts or be
overcautious and refuse benign examples. In addition, state-of-the-art toxicity
detectors have low true positive rates (TPR) at low false positive rates (FPR),
incurring high costs in real-world applications where toxic examples are rare.
In this paper, we explore
Moderation Using LLM Introspection (MULI), which detects toxic prompts using
the information extracted directly from LLMs themselves. We found significant
gaps between benign and toxic prompts in the distribution of alternative
refusal responses and in the distribution of the first response token's logits.
These gaps can be used to detect toxic prompts: we show that a toy model based
on the logits of specific starting tokens achieves reliable performance while
requiring no training or additional computational cost. We then build a more
robust detector using a sparse logistic regression model on the first response
token's logits, which greatly outperforms SOTA detectors under multiple metrics.
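The abstract's core idea can be sketched in a few lines: score a prompt by the logits the aligned LLM assigns to its first response token, and either threshold the logit of a refusal-style starting token (the "toy model") or train a sparse (L1-regularized) logistic regression on the full first-token logit vector. The following is a minimal sketch of that idea, not the authors' released code; the model name, the choice of "Sorry" as the refusal token, and the regression hyperparameters are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # assumption: any aligned chat model works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

@torch.no_grad()
def first_token_logits(prompt: str) -> torch.Tensor:
    """Return the logit vector for the first token the model would generate."""
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    logits = model(input_ids).logits        # (1, seq_len, vocab_size)
    return logits[0, -1].float().cpu()      # logits of the first response token

# Toy model: the logit of a refusal-style starting token ("Sorry" is an assumed choice).
SORRY_ID = tokenizer.encode("Sorry", add_special_tokens=False)[0]

def toy_score(prompt: str) -> float:
    # Higher score -> model is more inclined to refuse -> prompt more likely toxic.
    return first_token_logits(prompt)[SORRY_ID].item()

# Sparse logistic regression over the full first-token logit vector,
# given labeled prompts (1 = toxic, 0 = benign).
def train_detector(prompts, labels):
    X = torch.stack([first_token_logits(p) for p in prompts]).numpy()
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)
    clf.fit(X, labels)
    return clf
```

Because the logits are produced by the same forward pass the LLM would run anyway to answer the prompt, the toy scoring step adds no extra inference cost; only the logistic-regression variant needs a small amount of labeled data for training.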