Random Location Poisoning Backdoor Attack Against Automatic Modulation Classification in Wireless Networks

2023 IEEE/CIC International Conference on Communications in China (ICCC)(2023)

Abstract
The integration of Automatic Modulation Classification (AMC) technology with deep learning has led to its wide adoption in applications such as smart home wireless systems and mobile devices; if an AMC model is maliciously attacked, it poses a serious risk to user security. Most current research on backdoor attacks targets computer vision applications. AMC systems have been shown to be vulnerable to backdoor attacks as well, but existing methods do not transfer to the signal domain because modulated signals differ fundamentally from images. In this paper, we propose a backdoor attack method for deep learning-based AMC models: the adversary implants a backdoor by poisoning the amplitude at random locations in a very small fraction of the training data and changing the corresponding labels. In the inference stage, poisoned samples containing the trigger mislead the poisoned model into wrong outputs, while benign samples are still classified correctly. We demonstrate that the method achieves a 96.7% attack success rate by infecting only 1% of the training set samples without degrading benign accuracy. Because the trigger location is randomly selected for each sample, the concealment of the attack is further improved; we also quantitatively evaluate the degree of discrepancy between the temporal waveforms before and after the attack.
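The poisoning step described in the abstract (scaling the amplitude of a short segment at a random offset in ~1% of training samples and flipping their labels) can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the trigger length, scaling factor, and the toy 1-D signal representation are all assumptions chosen for clarity.

```python
import numpy as np

def poison_batch(signals, labels, target_label, poison_rate=0.01,
                 trigger_len=8, amplitude=2.0, rng=None):
    """Random-location amplitude-poisoning sketch (hypothetical parameters).

    signals: (N, L) array of amplitude samples (toy 1-D signal representation).
    For a poison_rate fraction of samples, a trigger_len segment at a random
    offset is scaled by `amplitude`, and the label is flipped to target_label.
    Returns the poisoned copies and the indices of the poisoned samples.
    """
    rng = np.random.default_rng(rng)
    signals = signals.copy()
    labels = labels.copy()
    n_poison = max(1, int(len(signals) * poison_rate))
    idx = rng.choice(len(signals), size=n_poison, replace=False)
    for i in idx:
        # Random trigger location, resampled independently per sample
        start = rng.integers(0, signals.shape[1] - trigger_len)
        # Amplitude perturbation acts as the backdoor trigger
        signals[i, start:start + trigger_len] *= amplitude
        # Label flipping steers the poisoned model toward the target class
        labels[i] = target_label
    return signals, labels, idx
```

Because the offset is drawn independently for each poisoned sample, the trigger has no fixed position that a defender could scan for, which is the source of the improved concealment the abstract claims.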
Keywords
Automatic Modulation Classification, deep learning, backdoor attacks, random locations