Exploring Adversarial Attacks on Learning-based Localization

WiseML @ WiSec (2023)

Abstract
We investigate the robustness of a convolutional neural network (CNN) RF transmitter localization model against adversarial actors who may poison or spoof sensor data to disrupt or defeat the algorithm. We train the CNN to estimate transmitter locations from sensor coordinates and received signal strength (RSS) measurements drawn from a real-world dataset. We consider attacks from adversaries of varying capability, ranging from naive, random attacks to omniscient, worst-case attacks. We apply countermeasures based on statistical outlier detection and train the CNN against adversarial attacks to improve performance. Adversarial training is shown to completely neutralize some attacks and to improve accuracy by up to 65% in other cases. Our evaluation of countermeasures indicates that combining statistical techniques with adversarial training provides a more robust defense against adversarial attacks.
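The abstract's statistical-outlier countermeasure is not specified in detail here. As a minimal illustrative sketch (not the authors' method), one common robust approach is to flag RSS readings whose modified z-score, computed from the median absolute deviation (MAD), exceeds a cutoff; the function name and threshold below are hypothetical.

```python
from statistics import median

def flag_rss_outliers(rss_dbm, threshold=3.5):
    """Flag RSS readings (in dBm) whose modified z-score exceeds the threshold.

    Illustrative sketch only: uses the median absolute deviation (MAD),
    a robust statistic, so a few poisoned or spoofed sensors do not
    distort the baseline the way a mean/stddev estimate would.
    """
    med = median(rss_dbm)
    mad = median(abs(x - med) for x in rss_dbm)
    if mad == 0:
        # All readings identical apart from none; nothing to flag robustly.
        return [False] * len(rss_dbm)
    # 0.6745 scales MAD to be consistent with the standard deviation
    # under a normal distribution (the usual modified z-score convention).
    return [abs(0.6745 * (x - med) / mad) > threshold for x in rss_dbm]

# Example: ten honest sensors near -70 dBm plus one spoofed reading.
readings = [-71, -69, -70, -72, -68, -70, -69, -71, -70, -70, -30]
print(flag_rss_outliers(readings))
# -> only the final (-30 dBm) reading is flagged
```

Flagged readings could then be excluded before the CNN's location estimate is computed; whether that suffices depends on how many sensors the adversary controls.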