MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)

Cited by 22 | Views 72
Abstract
Recent work has revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks, in which input examples are intentionally perturbed to fool DNNs. In this work, we revisit the DNN training process that includes adversarial examples in the training dataset to improve the DNNs' resilience to adversarial attacks, namely, adversarial training. Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working ranges in resisting attacks. Based on this observation, we propose a multi-strength adversarial training method (MAT) that combines adversarial training examples with different adversarial strengths to defend against adversarial attacks. Two training structures, mixed MAT and parallel MAT, are developed to facilitate the tradeoff between training time and hardware cost. Our results show that MAT can substantially reduce the accuracy degradation of deep learning systems under adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN. The tradeoffs among training time, robustness, and hardware cost are also discussed on an FPGA platform.
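
As a rough illustration of the multi-strength idea described in the abstract, the sketch below mixes clean examples with FGSM adversarial examples generated at several perturbation strengths within each training step, loosely mirroring the "mixed MAT" structure. It is not the paper's implementation: the toy model, the epsilon values (0.05, 0.1, 0.2), and the choice of FGSM and PyTorch are assumptions made only for illustration.

```python
# Illustrative sketch (assumed details, not the authors' exact MAT code):
# each step trains on a batch combining clean inputs with FGSM adversarial
# examples crafted at multiple perturbation strengths.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps):
    """Generate FGSM adversarial examples with perturbation strength eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

def mixed_mat_step(model, optimizer, x, y, epsilons=(0.05, 0.1, 0.2)):
    """One training step on clean plus multi-strength adversarial examples.
    The epsilon values here are placeholders, not taken from the paper."""
    model.eval()  # craft perturbations against the current weights
    adv_batches = [fgsm_examples(model, x, y, eps) for eps in epsilons]
    model.train()
    x_mix = torch.cat([x] + adv_batches, dim=0)
    y_mix = torch.cat([y] * (1 + len(epsilons)), dim=0)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_mix), y_mix)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Tiny toy model and a random MNIST-sized batch, just to show the call pattern.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))
    print(mixed_mat_step(model, optimizer, x, y))
```

A parallel-MAT-style variant would instead train separate branches, one per strength, trading extra hardware for shorter training time, as the abstract's FPGA discussion suggests.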
Keywords
neural network,adversarial example,adversarial attack,adversarial training,FPGA