
GCSA: A New Adversarial Example-Generating Scheme Toward Black-Box Adversarial Attacks.

IEEE Trans. Consumer Electron. (2024)

Abstract
This paper focuses on the transferability problem of adversarial examples in black-box attack scenarios, where model information such as the neural network structure is unavailable. To address this problem, we propose a new adversarial example-generating scheme that bridges a data-modality conversion to produce transferable adversarial examples without relying on a substitute model. Our contributions are threefold: i) we design an integrated framework that produces transferable adversarial examples via three components, i.e., image-to-graph conversion, perturbation of the converted graph, and graph-to-image inversion; ii) after converting the image to a graph, we pinpoint critical graph characteristics to perturb using gradient-oriented and optimization-oriented adversarial attacks, and then invert the graph perturbation into the corresponding pixel disturbance; iii) multi-faceted experiments verify the rationality and effectiveness of our scheme in comparison with three baseline methods. Our work has two novelties: first, by dispensing with a substitute model, our scheme requires no prior information about the victim model; second, we explore the possibility of inferring the adversarial features of image data by drawing support from network/graph science. In addition, we present three key issues that merit deeper discussion; along with these open issues, our work deserves further study.
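The three-component pipeline described in the abstract (image-to-graph conversion, perturbation of the graph, graph-to-image inversion) can be sketched as follows. This is a minimal illustration, not the authors' actual method: the 4-neighbor similarity graph, the FGSM-style sign step on edge weights, and the node-strength inversion are all simplifying assumptions made for the example.

```python
import numpy as np

def image_to_graph(img):
    # Illustrative conversion: each pixel is a node; 4-neighbor edges are
    # weighted by intensity similarity (NOT the paper's actual construction).
    h, w = img.shape
    n = h * w
    adj = np.zeros((n, n))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    j = ny * w + nx
                    wgt = 1.0 - abs(img[y, x] - img[ny, nx])
                    adj[i, j] = adj[j, i] = wgt
    return adj

def perturb_graph(adj, grad, eps=0.05):
    # Gradient-oriented perturbation of graph characteristics, sketched here
    # as an FGSM-style sign step on edge weights; `grad` is assumed given.
    return np.clip(adj + eps * np.sign(grad), 0.0, 1.0)

def graph_to_image(adj, shape):
    # Illustrative inversion: map each node's strength (weighted degree)
    # back to a normalized pixel value.
    deg = adj.sum(axis=1)
    rng = deg.max() - deg.min()
    return ((deg - deg.min()) / (rng + 1e-12)).reshape(shape)
```

A round trip would chain the three stages: `graph_to_image(perturb_graph(image_to_graph(img), grad), img.shape)` yields the perturbed image whose pixel disturbance corresponds to the graph-level attack.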
Key words
deep learning, adversarial examples, black-box adversarial attack, transferability