Generating Adversarial Examples with Distance Constrained Adversarial Imitation Networks

IEEE Transactions on Dependable and Secure Computing (2021)

Abstract
Recent studies have shown that neural networks are vulnerable to adversarial examples, which are crafted by adding small perturbations to clean examples in order to trick a classifier into misclassifying them. Various optimization-based approaches have been proposed for generating adversarial examples with minimal perturbation. Model-training-based methods such as the Adversarial Transformation Network (ATN) offer a fundamentally different approach: they directly transform an input into an adversarial example, which promises fast generation of adversarial examples. However, the resulting adversarial examples may be of suboptimal quality, exhibiting either significantly large perturbations or a low attack success rate when perturbations are small. In this paper, we propose a distance-constrained Adversarial Imitation Network (AIN), which enhances ATN and is capable of generating both targeted and untargeted adversarial examples under an explicit distance constraint. AIN not only generates large-scale adversarial examples as efficiently as ATN, but also imitates the behavior of state-of-the-art optimization-based methods, thereby achieving improved quality. Extensive experiments show that AIN significantly outperforms ATN and other Generative Adversarial Network (GAN) based methods in the quality of the generated adversarial examples, and is much more efficient than optimization-based methods while achieving comparable quality.
Keywords
Adversarial examples, distance constrained, imitation, attack, neural network
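
To make the idea in the abstract concrete, the sketch below shows a generic ATN-style transformation network trained with an explicit distance constraint: a small convolutional generator maps a clean image to a bounded perturbation, and its training loss combines an attack objective with a distance penalty to the clean input. This is a minimal illustration under assumptions, not the paper's AIN architecture or loss; it assumes PyTorch and a pretrained classifier `victim`, and the names `PerturbationNet`, `train_step`, `eps`, and `dist_weight` are hypothetical.

```python
# Minimal sketch (not the paper's AIN) of an ATN-style generator with an
# explicit distance constraint. Assumes PyTorch and a pretrained `victim`
# classifier; all names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationNet(nn.Module):
    """Maps a clean image to a bounded perturbation; tanh keeps it in [-eps, eps]."""
    def __init__(self, channels=1, eps=0.1):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        delta = self.eps * self.net(x)            # explicit L_inf-style distance constraint
        return torch.clamp(x + delta, 0.0, 1.0)   # keep the adversarial example a valid image

def train_step(gen, victim, x, y, optimizer, dist_weight=1.0):
    """One untargeted step: push the victim away from the true label
    while penalizing the L2 distance to the clean input."""
    victim.eval()
    x_adv = gen(x)
    logits = victim(x_adv)
    adv_loss = -F.cross_entropy(logits, y)        # encourage misclassification
    dist_loss = F.mse_loss(x_adv, x)              # perturbation (distance) penalty
    loss = adv_loss + dist_weight * dist_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In use, one would build `gen = PerturbationNet(channels, eps)`, create an optimizer over `gen.parameters()` only (so the victim stays frozen), and loop `train_step` over clean batches; `dist_weight` trades off attack success against perturbation size, playing the role of the distance constraint discussed in the abstract.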