Boosting the Transferability of Ensemble Adversarial Attack via Stochastic Average Variance Descent

Guowen Xu, Lei Zhao, Zhizhi Liu, Sixing Wu, Wei Chen, Liwen Wu, Bin Pu, Shaowen Yao

IET Information Security (2024)

Abstract
Adversarial examples have the property of transferring across models, which poses a serious threat to deep learning models. To reveal the shortcomings of existing deep learning models, ensemble methods have been introduced into the generation of transferable adversarial examples. However, most model ensemble attacks directly combine the outputs of the different models while ignoring the large differences in their optimization directions, which severely limits transfer attack ability. In this work, we propose a new ensemble attack method called the stochastic average ensemble attack. Unlike the existing approach of averaging the outputs of each model into an integrated output, we continuously optimize the ensemble gradient in an inner loop using each model's historical gradient and the average gradient over the different models. In this way, the adversarial examples are updated in a more appropriate direction, which makes the crafted adversarial examples more transferable. Experimental results on ImageNet show that our method generates highly transferable adversarial examples and outperforms existing methods.
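The inner-loop update described in the abstract can be read as an SVRG-style, variance-reduced gradient applied to a model ensemble. Below is a minimal sketch of that reading, assuming an iterative FGSM-style attack under an L-infinity budget; the function name svrg_ensemble_attack, the hyperparameters, and the exact correction term are illustrative assumptions, not the authors' implementation.

```python
import random
import torch


def svrg_ensemble_attack(models, loss_fn, x, y, eps=16 / 255, alpha=2 / 255,
                         outer_steps=10, inner_steps=8):
    """Hypothetical SVRG-style ensemble attack sketch (not the paper's code).

    Outer loop: snapshot the current adversarial example and compute the
    average ("full") gradient over all ensemble models at that snapshot.
    Inner loop: pick a random model and form a variance-reduced gradient
        g = grad_i(x_adv) - grad_i(x_snap) + avg_grad(x_snap),
    then take a signed step and project back into the eps-ball around x.
    """
    x_adv = x.clone().detach()

    def grad_of(model, inp):
        # Gradient of the loss w.r.t. the input for a single model.
        inp = inp.clone().detach().requires_grad_(True)
        loss = loss_fn(model(inp), y)
        return torch.autograd.grad(loss, inp)[0]

    for _ in range(outer_steps):
        x_snap = x_adv.clone().detach()
        # Average gradient over the ensemble at the snapshot point.
        avg_grad = torch.stack([grad_of(m, x_snap) for m in models]).mean(dim=0)

        for _ in range(inner_steps):
            m = random.choice(models)
            # Stochastic gradient corrected by the snapshot ("history")
            # gradient of the same model plus the ensemble average.
            g = grad_of(m, x_adv) - grad_of(m, x_snap) + avg_grad
            x_adv = x_adv + alpha * g.sign()
            # Project into the L-infinity ball and valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()
```

Under this reading, the correction term keeps each inner-loop step aligned with the ensemble-average direction while still being cheap (one model's gradient per step), which matches the abstract's claim of steering the update in a more appropriate direction than naive output averaging.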