Studying the Robustness of Anti-Adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING (2024)

Abstract
Device fingerprinting combined with Machine and Deep Learning (ML/DL) reports promising performance when detecting spectrum sensing data falsification (SSDF) attacks. However, the amount of data needed to train models and the privacy concerns of the scenario limit the applicability of centralized ML/DL. Federated learning (FL) addresses these drawbacks but is vulnerable to adversarial participants and attacks. The literature has proposed countermeasures, but more effort is required to evaluate the performance of FL when detecting SSDF attacks and its robustness against adversaries. Thus, the first contribution of this work is an FL-oriented dataset modeling the behavior of resource-constrained spectrum sensors affected by SSDF attacks. The second contribution is a pool of experiments analyzing the robustness of FL models according to i) three families of sensors, ii) eight SSDF attacks, iii) four FL scenarios dealing with anomaly detection and binary classification, iv) up to 33% of participants implementing data and model poisoning attacks, and v) four aggregation functions acting as anti-adversarial mechanisms. In conclusion, FL achieves promising performance when detecting SSDF attacks. Without anti-adversarial mechanisms, FL models are particularly vulnerable with >16% of adversaries. Coordinate-wise median is the best mitigation for anomaly detection, but binary classifiers are still affected with >33% of adversaries.
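The abstract identifies coordinate-wise median as the most effective anti-adversarial aggregation function for anomaly detection. The idea is to aggregate each model parameter across participants with a median instead of a mean, so that a minority of poisoned updates cannot arbitrarily shift the global model. A minimal sketch of this aggregation (not the authors' implementation; client names and values are illustrative):

```python
import numpy as np

def coordinatewise_median(client_updates):
    """Aggregate client parameter vectors by taking the median of each
    coordinate independently, limiting the influence of poisoned updates
    sent by adversarial participants."""
    stacked = np.stack(client_updates)  # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

# Three honest clients and one model-poisoning adversary (hypothetical values):
honest = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([0.9, 1.9])]
poisoned = [np.array([100.0, -100.0])]
agg = coordinatewise_median(honest + poisoned)
# The extreme poisoned update is ignored per coordinate; the aggregate
# stays close to the honest clients' values.
```

With a plain mean, the single poisoned update would dominate the aggregate; the per-coordinate median keeps it near the honest majority, which is consistent with the paper's finding that robustness degrades once adversaries exceed a minority share of participants.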
Key words
Resource-constrained devices, cyberattacks, fingerprinting, federated learning, adversarial attacks, robustness