A Method for Adversarial Example Generation Using Wavelet Transformation

AHFE International (2023)

Abstract
With the advance of Deep Neural Networks (DNNs), the accuracy of various machine-learning tasks has improved dramatically; image classification is one of the most typical tasks. However, various papers have pointed out the vulnerability of DNNs: it is known that small changes to an image can easily make a DNN model misclassify it. Images with such small changes are called adversarial examples. This vulnerability of DNNs is a major problem in practical image recognition. There has been research both on methods to generate adversarial examples and on methods to defend DNN models against them. In addition, the transferability of adversarial examples can be exploited to attack a model easily in a black-box setting.

Many attack methods add perturbations to images in the spatial domain. In contrast, we focus on the spatial-frequency domain and propose a new attack method. Since the low-frequency component carries the overall tendency of the color distribution of an image, a change to it is easy to see. The high-frequency component, on the other hand, holds less information than the low-frequency component, so even if it is changed, the change is less apparent in the appearance of the image. An attack on the high-frequency component is therefore difficult to perceive at a glance, which makes it an easy target. Thus, by adding perturbation to the high-frequency components of an image, we can expect to generate adversarial examples that look similar to the original image to the human eye.

R. Duan et al. applied the discrete cosine transformation (DCT) to images when focusing on the spatial-frequency domain. Their method uses quantization, which drops information that DNN models would otherwise extract. However, it has the disadvantage that block-like noise appears in the resulting image, because the target image is split into 8 × 8 blocks to apply the DCT. To avoid this disadvantage, we propose a method that applies the wavelet transformation to target images. Reducing the information in the high-frequency component changes the image with a perturbation that is not noticeable, which results in a smaller change to the image than in previous studies.

In our experiments, the peak signal-to-noise ratio (PSNR) was used to quantify how much an image was degraded from the original. We compared our method, using different learning rates to generate perturbations, with the previous study, and found that the maximum PSNR of our method was about 43 dB, compared to about 32 dB in the previous study. Unlike previous studies, the attack success rate was also improved without using quantization: our method improved attack accuracy by about 9% compared to the previous work.
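To make the transform-domain idea concrete, the following sketch (our own illustration, not the authors' code) uses the PyWavelets library to decompose a grayscale image with a single-level 2-D discrete wavelet transform, perturb only the high-frequency detail subbands, reconstruct the image, and measure degradation with PSNR. The perturbation magnitude epsilon and the random sign step are assumptions standing in for the paper's learning-rate-driven perturbation optimization, which the abstract does not specify.

```python
# Minimal sketch of a wavelet-domain perturbation, assuming PyWavelets (pywt)
# and NumPy. The paper's gradient-based optimization loop is omitted; a random
# sign step of hypothetical magnitude `epsilon` stands in for it.
import numpy as np
import pywt

def perturb_high_freq(image: np.ndarray, epsilon: float = 2.0,
                      wavelet: str = "haar") -> np.ndarray:
    """Perturb only the high-frequency DWT subbands of a grayscale image."""
    # Single-level 2-D DWT: cA is the low-frequency approximation;
    # cH, cV, cD are the horizontal/vertical/diagonal detail subbands.
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), wavelet)
    # Leave cA untouched so the overall color distribution is preserved;
    # changes to the detail subbands are less visible to the human eye.
    cH = cH + epsilon * np.sign(np.random.randn(*cH.shape))
    cV = cV + epsilon * np.sign(np.random.randn(*cV.shape))
    cD = cD + epsilon * np.sign(np.random.randn(*cD.shape))
    adv = pywt.idwt2((cA, (cH, cV, cD)), wavelet)
    return np.clip(adv, 0, 255)

def psnr(original: np.ndarray, distorted: np.ndarray,
         peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, used to quantify image degradation."""
    diff = original.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Usage: adv = perturb_high_freq(img); print(psnr(img, adv))
```

Because the inverse DWT is applied to the whole image rather than to 8 × 8 blocks, this kind of perturbation avoids the block-like noise associated with blockwise DCT approaches.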
Keywords
adversarial example generation, wavelet transformation