Data Poisoning Attacks and Mitigation Strategies on Federated Support Vector Machines

SN Computer Science (2024)

Abstract
Federated learning is a machine learning approach in which multiple edge devices, each holding local data samples, send locally trained models to a central server, which aggregates them under a specific aggregation rule. The distributed nature of federated learning exposes these devices to poisoning attacks, especially during the training phase. This paper presents a systematic study of the effect of data poisoning attacks on SVM classifiers in a federated setting (F-SVM). In particular, we implement two widely recognized data poisoning attacks against SVMs, the Label-Flipping and Optimal-Poisoning attacks, and evaluate their impact on global F-SVM accuracy using the MNIST, FashionMNIST, CIFAR-10, and IJCNN1 datasets. Our empirical results reveal significant reductions in accuracy, highlighting the susceptibility of F-SVMs to such attacks: when 30% of the edge devices are compromised, accuracy drops by 15%, and when the share of compromised devices rises to 35%, accuracy drops by 32%. We also evaluate the impact of varying the ratio of poisoned points and of data that are not independently and identically distributed (non-IID) across edge devices. In addition, we investigate preliminary defense mechanisms against poisoning attacks on F-SVMs, assessing the efficacy of three popular unsupervised outlier detection methods: the K-nearest Neighbor algorithm, Histogram-based outlier detection, and Copula-based outlier detection. All our source code is written in Python and is open source.
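To make the setup concrete, here is a minimal sketch, not the paper's code, of a federated linear SVM trained by FedAvg-style parameter averaging, in which compromised clients flip a fraction of their labels before local training. The client count, round count, `flip_fraction`, and the use of scikit-learn's `SGDClassifier` with hinge loss as the local SVM are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a federated linear SVM under a
# label-flipping attack. Assumptions: binary task, FedAvg-style averaging of
# the local SVM parameters, sklearn's SGDClassifier(loss="hinge") per client.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_clients, n_malicious, n_rounds = 10, 3, 20   # 30% compromised (illustrative)
flip_fraction = 0.8                            # labels each attacker flips
clients = list(zip(np.array_split(X_tr, n_clients),
                   np.array_split(y_tr, n_clients)))

w, b = np.zeros(X.shape[1]), 0.0               # global model parameters
for _ in range(n_rounds):
    coefs, biases = [], []
    for cid, (Xc, yc) in enumerate(clients):
        if cid < n_malicious:                  # compromised client: flip labels
            flip = rng.random(yc.size) < flip_fraction
            yc = np.where(flip, 1 - yc, yc)
        clf = SGDClassifier(loss="hinge", max_iter=5, tol=None)
        clf.fit(Xc, yc, coef_init=w.reshape(1, -1),
                intercept_init=np.array([b]))  # warm-start from global model
        coefs.append(clf.coef_.ravel())
        biases.append(clf.intercept_[0])
    w, b = np.mean(coefs, axis=0), np.mean(biases)  # FedAvg aggregation

acc = np.mean(((X_te @ w + b) > 0).astype(int) == y_te)
print(f"global accuracy with {n_malicious}/{n_clients} poisoned clients: {acc:.3f}")
```

On the defense side, the three detectors named in the abstract have off-the-shelf implementations in the PyOD library; whether the paper uses PyOD is not stated. One plausible way to target label flipping is to run a detector per class, dropping points whose features look atypical for their claimed label before local training. The per-class filtering and the `contamination` setting below are assumptions, not the paper's procedure.

```python
# Sketch of an outlier-filtering defense using PyOD implementations of the
# three detectors the abstract names. Per-class filtering and the
# `contamination` value are illustrative assumptions.
import numpy as np
from pyod.models.knn import KNN      # k-nearest-neighbor distance scores
from pyod.models.hbos import HBOS    # histogram-based outlier scores
from pyod.models.copod import COPOD  # copula-based outlier detection

def filter_poisoned(X, y, Detector, contamination=0.1):
    """Drop points flagged as outliers within their own class."""
    keep = np.ones(y.size, dtype=bool)
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        det = Detector(contamination=contamination)
        det.fit(X[idx])
        keep[idx[det.labels_ == 1]] = False   # labels_ == 1 means outlier
    return X[keep], y[keep]

# Toy data with a few flipped labels, just to exercise the filter.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X[:, 0] > 0).astype(int)
y[:25] = 1 - y[:25]                           # simulated label flipping
for Detector in (KNN, HBOS, COPOD):
    Xf, yf = filter_poisoned(X, y, Detector)
    print(f"{Detector.__name__}: kept {yf.size}/{y.size} points")
```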
Keywords
Support vector machine, Poisoning attack, Outlier detection, Federated learning