Detecting Adversarial Spectrum Attacks via Distance to Decision Boundary Statistics
CoRR(2024)
Abstract
Machine learning has been adopted for efficient cooperative spectrum sensing.
However, it introduces an additional security risk: attackers can leverage
adversarial machine learning to craft malicious spectrum sensing values that
deceive the fusion center, known as adversarial spectrum attacks. In this paper,
we propose an efficient framework for detecting adversarial spectrum attacks.
Our design leverages the concept of the distance to the decision boundary (DDB)
observed at the fusion center and compares the training and testing DDB
distributions to identify adversarial spectrum attacks. We develop a
computationally efficient method to compute the DDB for machine learning based
spectrum sensing systems. Experimental results based on realistic spectrum data
show that our method, under typical settings, achieves a high detection rate of
up to 99% and maintains a low false alarm rate of less than 1%. In addition,
our method for computing the DDB from spectrum data achieves 54%–64%
improvements in computational efficiency over existing distance calculation
methods. The proposed DDB-based detection framework offers a practical and
efficient solution for identifying malicious sensing values created by
adversarial spectrum attacks.
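The core idea of the abstract — compare the DDB distribution of incoming sensing values against the DDB distribution seen during training — can be illustrated with a minimal sketch. The sketch below is not the paper's method: it assumes a simple linear classifier at the fusion center (where DDB has the closed form |w·x + b| / ‖w‖), synthetic Gaussian "sensing values", a hypothetical attack that pushes samples toward the boundary, and a hand-rolled two-sample Kolmogorov–Smirnov statistic as the distribution-comparison test.

```python
# Hypothetical sketch: DDB-distribution comparison for attack detection.
# Assumes a linear decision boundary w.x + b = 0; all data is synthetic.
import numpy as np

def ddb_linear(X, w, b):
    """Distance of each sample to the hyperplane w.x + b = 0."""
    return np.abs(X @ w + b) / np.linalg.norm(w)

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (max CDF gap)."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
w, b = np.array([1.0, -1.0]), 0.0

# Benign training and testing sensing values (same distribution).
X_train = rng.normal(0.0, 1.0, size=(500, 2))
X_test = rng.normal(0.0, 1.0, size=(500, 2))

# Hypothetical adversarial values: moved 95% of the way to the boundary,
# mimicking perturbations crafted to flip the fusion center's decision.
X_adv = rng.normal(0.0, 1.0, size=(500, 2))
X_adv = X_adv - 0.95 * ((X_adv @ w + b) / (w @ w))[:, None] * w

d_train = ddb_linear(X_train, w, b)
ks_benign = ks_stat(d_train, ddb_linear(X_test, w, b))  # small gap
ks_attack = ks_stat(d_train, ddb_linear(X_adv, w, b))   # large gap
```

Flagging a test batch when its KS statistic against the training DDB distribution exceeds a threshold gives a simple detector; the paper's contribution includes making the DDB computation itself efficient for realistic spectrum sensing models, which this toy linear case sidesteps.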