AdvCGAN: An Elastic and Covert Adversarial Examples Generating Framework

2021 International Joint Conference on Neural Networks (IJCNN)

Abstract
Recently, a new methodology using generative adversarial networks (GANs) has been proposed to produce adversarial examples, breaking the limitations of previous methods that depend on different norm levels. Once the generator is trained, it can efficiently generate perturbations for any instance, because it learns to approximate the distribution of real instances. However, this category of GAN-based method still has two shortcomings: i) the label predicted at the attacking stage depends entirely on a fixed or randomly chosen label from the training stage, so it cannot solve the elasticity problem of producing an adversarial example with an arbitrarily assigned label in a targeted attack once the generator has finished training; and ii) it only requires the produced adversarial example to be close to the real instances, which cannot guarantee that the generated adversarial example is perceptually indistinguishable from its corresponding original instance. These two disadvantages leave this kind of method lacking flexibility and covertness. To circumvent these two predicaments, we propose a simple and easy-to-use adversarial example generating framework, AdvCGAN, which trains a conditional generative adversarial network while jointly considering the similarity in data distributions and in image labels between the adversarial examples and the original instances, so that the results are imperceptible to humans. Concretely, AdvCGAN trains the conditional GAN with both image data and label (normal and attack) information, so the generator can use the guidance of the label information to produce an adversarial example with any specific label at the attacking stage. Extensive experiments on the commonly used MNIST and CIFAR-10 datasets show that AdvCGAN significantly outperforms other methods on multi-faceted evaluation.
The results show that AdvCGAN can elastically produce more realistic adversarial examples with any arbitrarily assigned attack label and achieve higher attack success rates, especially in targeted attacks.
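The label-conditioning mechanism the abstract describes (feeding the target label into the generator so that a single trained model can be steered toward any class at attack time) can be sketched as follows. This is a minimal illustrative sketch in NumPy, with hypothetical dimensions, an untrained toy generator, and an assumed perturbation bound EPS; it shows only the conditioning and covertness-clipping idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10   # e.g. MNIST / CIFAR-10 class count
IMG_DIM = 28 * 28  # flattened MNIST-sized image (hypothetical)
EPS = 0.1          # perturbation bound for covertness (assumed value)

# Toy stand-in for the trained generator: a single random linear layer.
# In AdvCGAN this would be a conditional GAN generator learned jointly
# with a discriminator on both image data and label information.
W = rng.normal(scale=0.01, size=(IMG_DIM + NUM_CLASSES, IMG_DIM))

def generate_adversarial(x, target_label):
    """Produce an adversarial example for x aimed at target_label.

    The target label is one-hot encoded and concatenated with the image,
    so the same generator can be pointed at any arbitrarily assigned
    label at the attacking stage (the 'elastic' targeted attack).
    """
    y = np.zeros(NUM_CLASSES)
    y[target_label] = 1.0
    inp = np.concatenate([x, y])
    # tanh bounds the raw output, so the perturbation lies in [-EPS, EPS]
    perturbation = EPS * np.tanh(inp @ W)
    # clip back to valid pixel range; this can only shrink the deviation
    return np.clip(x + perturbation, 0.0, 1.0)

x = rng.uniform(0.0, 1.0, IMG_DIM)  # stand-in for a real image
x_adv = generate_adversarial(x, target_label=3)
```

Because the perturbation is bounded before being added, the adversarial example stays within EPS of the original in every pixel, which is the covertness property the framework targets.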
Keywords
AdvCGAN,attacking stage,produced adversarial example,generated adversarial example,conditional generative adversarial network,realistic adversarial examples,covert adversarial examples generating framework,randomly-chosen label,MNIST,CIFAR-10,GAN-based method,targeted attack,elasticity problem