Adversarial Attacks in Modulation Recognition With Convolutional Neural Networks

IEEE Transactions on Reliability (2021)

Cited by 95 | Views 35
Abstract
Deep learning (DL) models are vulnerable to adversarial attacks: by adding a subtle perturbation that is imperceptible to the human eye, an attacker can cause a convolutional neural network (CNN) to produce erroneous results, which greatly reduces the reliability and security of DL tasks. Given the wide application of modulation recognition in the communication field and the rapid development of DL, this article adds well-designed adversarial perturbations to the input signal to explore the performance of attack methods on modulation recognition, measures the effectiveness of adversarial attacks on signals, and provides an empirical evaluation of the reliability of CNNs. The results indicate that adversarial attacks reduce the accuracy of the target model significantly; when the perturbation factor is 0.001, the accuracy of the model drops by about 50% on average. Among the attacks, iterative methods show greater attack performance than the one-step method. In addition, the consistency of the waveform before and after the perturbation is examined, to assess whether the added perturbations are small enough (i.e., hard to distinguish by human eyes). This article also aims to inspire researchers to further improve the reliability of CNNs against adversarial attacks.
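The abstract contrasts a one-step attack with iterative attacks under a small perturbation factor (0.001). The paper's exact implementation is not reproduced here, but a minimal PyTorch sketch of the standard one-step FGSM and a BIM-style iterative variant illustrates the idea; the model, input tensor shapes, and the eps value are assumptions for illustration, with eps mirroring the perturbation factor cited in the abstract.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.001):
    """One-step attack (FGSM): perturb the input signal once in the
    direction of the loss-gradient sign, scaled by the factor eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def bim_attack(model, x, y, eps=0.001, steps=10):
    """Iterative attack (BIM-style): repeat small signed-gradient steps
    and clip the accumulated perturbation to stay within the eps ball."""
    x_adv = x.clone().detach()
    alpha = eps / steps  # per-step size so the total budget stays at eps
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        # Project back onto the eps-ball around the clean signal.
        x_adv = x + (x_adv - x).clamp(-eps, eps)
    return x_adv
```

In this sketch `model` is any CNN classifier over raw I/Q samples (e.g., input shape `[batch, 2, 128]`) and `y` holds the true modulation labels; because the iterative variant refines the perturbation over several projected steps, it typically degrades accuracy more than the single FGSM step, consistent with the comparison reported in the abstract.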
Keywords
Adversarial examples, convolutional neural network (CNN), modulation recognition, radio security, white-box attacks