Fooling AI with AI: An Accelerator for Adversarial Attacks on Deep Learning Visual Classification

2019 IEEE 30th International Conference on Application-specific Systems, Architectures and Processors (ASAP)

Abstract
Recent studies have shown that Deep Neural Networks (DNNs) are vulnerable to subtle perturbations that are imperceptible to the human visual system yet can fool DNN models into producing wrong outputs. Adversarial attack algorithms are a first step toward securing deep learning, as they provide an avenue for training future defense networks. We propose the first hardware accelerator for adversarial attacks, based on memristor crossbar arrays. Our design significantly improves the throughput of a visual adversarial perturbation system, which can in turn improve the robustness and security of future deep learning systems. Exploiting the unique characteristics of the attack algorithms, we propose four implementations of the adversarial attack accelerator (A^3) that improve throughput, energy efficiency, and computational efficiency.
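The abstract does not name the specific attack algorithm the accelerator targets. As an illustration of the kind of gradient computation such a system accelerates, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), a canonical adversarial attack, in PyTorch; the function name, epsilon value, and choice of cross-entropy loss are illustrative assumptions, not details from the paper.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    # FGSM (illustrative): nudge the input along the sign of the loss
    # gradient so a small, nearly imperceptible change pushes the model
    # toward a wrong prediction.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Clamp to the valid pixel range so the result is a legal image.
    return x_adv.clamp(0.0, 1.0).detach()

The gradient computation above consists largely of matrix-vector products, the operation that memristor crossbar arrays perform natively in the analog domain, which is what makes such attacks a candidate for this style of acceleration.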
Keywords
Deep Learning Visual Classification, Hardware Accelerator, Adversarial Attacks, Memristor Crossbar