Poisoning-Assisted Property Inference Attack Against Federated Learning

IEEE Transactions on Dependable and Secure Computing (2022)

Abstract
Federated learning (FL) has emerged as an ideal privacy-preserving learning technique that trains a global model collaboratively while keeping private data local. However, recent advances have demonstrated that FL is still vulnerable to inference attacks, such as reconstruction attacks and membership inference. Among these attacks, the property inference attack, which aims to infer properties of the training data that are irrelevant to the learning objective, has received little attention despite causing severe privacy leakage. Existing property inference attacks either fail to achieve satisfactory performance once the global model has converged or break down under dynamic FL, where participants can drop in and drop out freely. In this paper, we propose a novel poisoning-assisted property inference attack (PAPI-attack) against FL. The key insight is that the periodic model updates carry an underlying discriminative signal that reflects changes in the data distribution, and in particular the occurrence of the sensitive property. A malicious participant can therefore construct a binary attack model to infer this unintended information. More importantly, we present a property-specific poisoning mechanism that modifies the labels of the adversary's training data to distort the decision boundary of the shared (global) model in FL. Consequently, benign participants are induced to disclose more information about the sensitive property. Extensive experiments on real-world datasets demonstrate that PAPI-attack outperforms state-of-the-art property inference attacks against FL.
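The abstract describes two components: a binary attack model trained on periodic model updates, and a property-specific label-flipping poisoning step. The following is a minimal, hypothetical sketch of these two ideas, assuming the adversary can observe model updates as flat numpy vectors and knows (or can simulate with auxiliary data) whether the sensitive property was present in a round; all names here are illustrative and not taken from the paper's code.

```python
# Hypothetical sketch of the two PAPI-attack ingredients described in the abstract.
# Assumptions: updates are flat numpy vectors; auxiliary labeled rounds are available.
import numpy as np
from sklearn.linear_model import LogisticRegression


def flip_labels_for_property(labels, has_property, target_label=0):
    """Property-specific poisoning (illustrative): the adversary flips the labels
    of its own samples that carry the sensitive property, distorting the shared
    model's decision boundary so benign updates leak more about that property."""
    poisoned = labels.copy()
    poisoned[has_property] = target_label
    return poisoned


class PropertyInferenceAttacker:
    """Binary attack model over periodic global-model updates.

    Each round, the adversary records the flattened parameter delta of the
    shared model together with a label indicating whether data with the
    sensitive property was present in that round."""

    def __init__(self):
        self.clf = LogisticRegression(max_iter=1000)

    def fit(self, update_deltas, property_present):
        # update_deltas: (n_rounds, n_params) array of model-update vectors
        # property_present: (n_rounds,) binary labels
        self.clf.fit(update_deltas, property_present)
        return self

    def infer(self, observed_delta):
        # Probability that the observed round's training data contained
        # the sensitive property.
        return self.clf.predict_proba(observed_delta.reshape(1, -1))[0, 1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_rounds, n_params = 200, 50
    # Synthetic stand-in for auxiliary update vectors: rounds containing the
    # property shift the update vector slightly.
    y = rng.integers(0, 2, size=n_rounds)
    X = rng.normal(size=(n_rounds, n_params)) + 0.5 * y[:, None]
    attacker = PropertyInferenceAttacker().fit(X, y)
    print("P(property present) for one observed update:", attacker.infer(X[0]))
```

This is only a schematic: the paper's attack operates on real FL update traces, whereas the snippet uses synthetic vectors and a logistic-regression classifier purely to show the shape of the pipeline.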
Keywords
federated learning, property, poisoning-assisted