Attacking Modulation Recognition With Adversarial Federated Learning in Cognitive-Radio-Enabled IoT

IEEE Internet of Things Journal (2024)

Abstract
Internet of Things (IoT) systems based on cognitive radio (CR) exhibit strong dynamic sensing and intelligent decision-making capabilities by exploiting spectrum resources effectively. Modulation recognition (MR) built on the federated learning (FL) framework is an essential component, but its reliance on uninterpretable deep learning (DL) introduces security risks. This article combines traditional signal-interference methods with data poisoning in FL to propose a new adversarial attack. A poisoning attack in a distributed framework manipulates the global model by controlling malicious users, which is both covert and highly impactful; carefully designed pseudo-noise in MR is likewise extremely difficult to detect. Combining the two techniques therefore poses a greater security threat. Building on this, we introduce a new adversarial attack method, the "chaotic poisoning attack," to reduce the recognition accuracy of the FL-based MR system. We establish effective attack conditions, and simulation results show that, under weak perturbations, our method reduces the accuracy of the local model by approximately 80% and that of the global model by around 20%. Compared with white-box attack methods, our approach exhibits superior performance and transferability.
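The abstract describes two ingredients: a weak chaotic pseudo-noise perturbation applied to modulated signals, and a poisoned local update pushed through federated averaging. The sketch below illustrates both ideas in a minimal, hypothetical form; the logistic-map noise source, the perturbation budget `eps`, and the malicious-update scaling are illustrative assumptions, not the paper's actual attack construction.

```python
import numpy as np

np.random.seed(0)  # reproducible toy example

def logistic_map_noise(n, x0=0.37, r=3.99):
    """Chaotic pseudo-noise from the logistic map (an assumed, illustrative
    choice for the 'chaotic' perturbation mentioned in the abstract)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return 2.0 * seq - 1.0  # rescale from (0, 1) to (-1, 1)

def poison_signal(signal, eps=0.05):
    """Add a weak chaotic perturbation to a signal sample (hypothetical)."""
    return signal + eps * logistic_map_noise(signal.size).reshape(signal.shape)

def fedavg(updates):
    """Plain FedAvg aggregation: average the clients' weight updates."""
    return np.mean(updates, axis=0)

# Toy round: 9 benign clients submit updates near +1; one malicious client
# submits a scaled opposite update to drag the global model off course.
benign = [np.ones(4) + 0.01 * np.random.randn(4) for _ in range(9)]
malicious = [-5.0 * np.ones(4)]  # assumed boosting factor for illustration
global_update = fedavg(benign + malicious)
```

A single malicious client shifts the aggregate noticeably because FedAvg weights all clients equally; this is the leverage the abstract's distributed poisoning exploits, while the small `eps` keeps the signal-level perturbation hard to detect.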
Key words
Adversarial attack, cognitive radio (CR), federated learning (FL), modulation recognition (MR)