Towards improving fast adversarial training in multi-exit network

Neural Networks (2022)

Abstract
Adversarial examples are usually generated by adding adversarial perturbations to clean samples, and are designed to deceive a model into making wrong classifications. Adversarial robustness refers to the ability of a model to resist adversarial attacks, and a mainstream method for enhancing it is adversarial training with Projected Gradient Descent (PGD). However, PGD is often criticized for being time-consuming when constructing adversarial examples. Fast adversarial training can improve adversarial robustness in a shorter time, but it can only train for a limited number of epochs, leading to sub-optimal performance. This paper demonstrates that a multi-exit network can reduce the impact of adversarial perturbations by outputting easily identified samples at early exits, thereby improving adversarial robustness. Further, we find that the multi-exit network can prevent the catastrophic overfitting that occurs in single-step adversarial training. Specifically, we find that, in the multi-exit network, (1) the norm of the weights of a fully connected layer in a non-overfitted exit is much smaller than that in an overfitted exit; and (2) catastrophic overfitting occurs when the late exits have larger weight norms than the early exits. Based on these findings, we propose an approach to alleviating the catastrophic overfitting of the multi-exit network. Compared to PGD adversarial training, our approach trains a model with lower time complexity and higher empirical robustness. Extensive experiments against various adversarial attacks demonstrate superior robust accuracy on CIFAR-10, CIFAR-100 and SVHN.
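The core idea described above, single-step (fast) adversarial training of a multi-exit network with the per-exit fully connected weight norms monitored as a signal of catastrophic overfitting, can be illustrated with a minimal sketch. This is not the authors' implementation: the toy MultiExitNet, the FGSM-with-random-start perturbation, the hyperparameters (eps = 8/255, alpha = 10/255), and the simple early-vs-late norm comparison are all assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the paper's released code): fast adversarial training
# of a toy multi-exit network, monitoring each exit's fully connected weight norm.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiExitNet(nn.Module):
    """Toy multi-exit CNN: two conv blocks, each followed by its own linear exit."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)   # early exit
        self.exit2 = nn.Linear(32 * 4 * 4, num_classes)   # late exit

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        return [self.exit1(h1.flatten(1)), self.exit2(h2.flatten(1))]


def fgsm_perturb(model, x, y, eps=8 / 255, alpha=10 / 255):
    """Single-step adversarial example with a random start (FGSM-style fast training)."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = sum(F.cross_entropy(z, y) for z in model(x + delta))   # train every exit
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1).detach()


def exit_fc_norms(model):
    """L2 norm of each exit's fully connected weights (cf. finding (1) in the abstract)."""
    return [model.exit1.weight.norm().item(), model.exit2.weight.norm().item()]


model = MultiExitNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x = torch.rand(8, 3, 32, 32)            # stand-in for a CIFAR-10 batch
y = torch.randint(0, 10, (8,))

x_adv = fgsm_perturb(model, x, y)                               # single-step attack
loss = sum(F.cross_entropy(z, y) for z in model(x_adv))         # adversarial training step
opt.zero_grad(); loss.backward(); opt.step()

norms = exit_fc_norms(model)
if norms[-1] > norms[0]:    # finding (2): a late exit's norm overtakes an early exit's
    print("warning: possible catastrophic overfitting, exit weight norms =", norms)
```

In this sketch the warning check only reports the condition from finding (2); how the paper's approach reacts to it (e.g., by adjusting the training of the offending exits) is described in the full text, not here.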
Keywords
Adversarial robustness, Adversarial defense, Fast adversarial training, Multi-exit network