An LLM Can Fool Itself: A Prompt-Based Adversarial Attack

ICLR 2024

Keywords: large language model, adversarial attack, adversarial robustness