Model poisoning attack in differential privacy-based federated learning.

Inf. Sci. (2023)

Abstract
Although federated learning can provide privacy protection for individual raw data, some studies have shown that the parameters or gradients shared under federated learning may still reveal user privacy. Differential privacy is a promising solution to this problem due to its small computational overhead. At present, differential privacy-based federated learning generally focuses on the trade-off between privacy and model convergence. Even though differential privacy obscures sensitive information by adding a controlled amount of noise to the confidential data, it opens a new door for model poisoning attacks: attackers can use the noise to escape anomaly detection. In this paper, we propose a novel model poisoning attack called the Model Shuffle Attack (MSA), which shuffles and scales the model parameters in a unique way. If the model is treated as a black box, it behaves like a benign model on the test set. Unlike other model poisoning attacks, the malicious model produced by MSA retains high accuracy on the test set while slowing the convergence of the global model and even causing it to diverge. Extensive experiments show that under FedAvg and robust aggregation rules, MSA significantly degrades the performance of the global model while remaining stealthy.
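To illustrate the general shuffle-and-scale idea described above, the sketch below exploits the permutation and positive-scaling symmetries of a ReLU network: hidden units can be permuted and rescaled, with compensating changes in the next layer, so the network is unchanged as a black box yet its parameters differ greatly from the original. This is a minimal illustration under assumed conditions (a toy two-layer ReLU MLP in NumPy), not the paper's exact MSA procedure; all names and dimensions are hypothetical.

```python
import numpy as np

# Sketch: a functionally equivalent but "shuffled and scaled" parameter set.
# As a black box the malicious model matches the benign one, yet its
# parameters lie far away in parameter space, which can hurt averaging.

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network: y = W2 @ relu(W1 @ x + b1) + b2
d_in, d_hidden, d_out = 8, 16, 4
W1 = rng.normal(size=(d_hidden, d_in))
b1 = rng.normal(size=d_hidden)
W2 = rng.normal(size=(d_out, d_hidden))
b2 = rng.normal(size=d_out)

def forward(W1, b1, W2, b2, x):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return W2 @ h + b2

# "Shuffle": permute hidden units, compensating in the next layer.
perm = rng.permutation(d_hidden)
# "Scale": positive per-unit factors; ReLU is positively homogeneous, so
# scaling a hidden unit by c and its outgoing weights by 1/c is invisible
# to a black-box observer.
scale = rng.uniform(0.1, 10.0, size=d_hidden)

W1_mal = W1[perm] * scale[:, None]
b1_mal = b1[perm] * scale
W2_mal = W2[:, perm] / scale[None, :]
b2_mal = b2.copy()

x = rng.normal(size=d_in)
same_output = np.allclose(forward(W1, b1, W2, b2, x),
                          forward(W1_mal, b1_mal, W2_mal, b2_mal, x))
print(same_output)                      # True: identical black-box behavior
print(np.linalg.norm(W1 - W1_mal))      # large distance in parameter space
```

Because the two parameter sets are functionally identical, test-set accuracy gives no signal of tampering; the damage only appears when such parameters are averaged with honest updates on the server.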
Key words
Privacy-preserving, Federated learning, Differential privacy, Model poisoning