LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attack

AsiaCCS (2023)

Abstract
Federated learning (FL) is a widely used distributed machine learning framework. However, recent studies have shown its susceptibility to poisoning membership inference attacks (MIA). In this attack, adversaries maliciously manipulate the local updates on selected samples and share the resulting gradients with the server (i.e., poisoning). Since honest clients perform gradient descent on their samples locally, an adversary can determine whether an attacked sample is a training sample by observing how the sample's prediction changes. This type of attack exacerbates traditional passive MIA, yet defense mechanisms remain largely unexplored. In this work, we first investigate the effectiveness of existing server-side robust aggregation algorithms (AGRs), designed to counter general poisoning attacks, in defending against poisoning MIA. We find that they are largely insufficient for mitigating poisoning MIA, which, unlike general poisoning, targets specific victim samples and has minimal impact on model performance. We therefore propose a new client-side defense mechanism, called LoDen, which leverages each client's unique ability to detect suspicious privacy attacks on its own data. We theoretically quantify the membership information leaked to poisoning MIA and provide a bound for this leakage under LoDen. We perform an extensive experimental evaluation on four benchmark datasets against poisoning MIA, comparing LoDen with six state-of-the-art server-side AGRs. LoDen consistently achieves a 0% missing rate in detecting poisoning MIA across all settings, and reduces the poisoning MIA success rate to 0% in most cases. The code of LoDen is available at https://github.com/UQ-Trust-Lab/LoDen.
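The client-side detection idea summarized above can be illustrated with a minimal sketch: an honest client records the received global model's predictions on its own training samples each round and flags samples whose previously correct prediction suddenly flips, a pattern consistent with a targeted poisoning MIA. This is an assumption-laden illustration rather than the authors' LoDen algorithm; the ClientSideMonitor class, the predict_proba interface, and the flip-based rule are hypothetical.

import numpy as np

class ClientSideMonitor:
    """Hypothetical client-side check (not the LoDen implementation).
    The client compares the current global model's predictions on its own
    training samples with the previous round's predictions and flags samples
    whose correct prediction flips to an incorrect one."""

    def __init__(self):
        self.prev_preds = None  # predicted labels from the previous round

    def check_round(self, global_model, local_x, local_y):
        # Predictions of the newly received global model on local training data.
        # global_model is assumed to expose a scikit-learn-style predict_proba.
        probs = global_model.predict_proba(local_x)
        preds = np.asarray(probs).argmax(axis=1)

        suspicious = []
        if self.prev_preds is not None:
            for i, (prev, cur, true) in enumerate(zip(self.prev_preds, preds, local_y)):
                # A sample that was predicted correctly last round but is now
                # misclassified is treated as potentially under attack.
                if prev == true and cur != true:
                    suspicious.append(i)

        self.prev_preds = preds
        # Indices the client could exclude from its next local update.
        return suspicious

In such a scheme the flagged indices would be removed from the client's local training batch before computing the next update, which is one plausible way a client-side defense could blunt the prediction-change signal the attacker relies on.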
Keywords
Federated learning, membership inference attack, privacy leakage