Targeted Defense Against Neuron-Exclusivity-Based Gradient Inversion Attack in Federated Learning.

2023 IEEE 8th International Conference on Smart Cloud (SmartCloud)

Abstract
As a distributed machine learning approach, federated learning enables multiple clients to collaboratively train a deep learning model for a common artificial intelligence task while sharing only their gradients. However, recent gradient inversion attacks demonstrate that clients' training data can be reconstructed from the shared gradients, posing a severe threat to the privacy of federated learning. In this paper, we focus on the neuron-exclusivity-based gradient inversion attack, the first analytic attack built on the neuron exclusivity state. Since this attack requires the key condition of sufficient exclusivity, we propose a batch-perturbation-based targeted defense that eliminates the exclusivity state of the training batches. The batch perturbation algorithm is modeled as an optimization problem: find the optimal perturbation on an input batch that satisfies the secure boundary condition. We then transform the optimization problem into a linear program and solve it with PuLP. We evaluate the proposed defense on two datasets, MNIST and OrganAMNIST. The experimental results demonstrate that our defense effectively prevents the neuron-exclusivity-based attack while having almost no negative impact on model training and model performance.
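The abstract names the linear-programming step and the PuLP solver but gives no formulation. The sketch below is only an illustration of that step, not the paper's method: the L1 objective, the toy first-layer weights, and the "pre-activation >= margin" form used to stand in for the secure boundary condition are all assumptions made for this example.

```python
# Hypothetical sketch of the LP step described in the abstract: find a
# minimal perturbation delta on a batch such that perturbed samples no
# longer leave any neuron exclusively activated. The constraint form
# (forcing every neuron's pre-activation above a margin on every sample,
# so no neuron fires on exactly one sample) is an assumed stand-in for
# the paper's actual secure boundary condition.
import numpy as np
import pulp

rng = np.random.default_rng(0)
B, D, H = 4, 8, 3                    # toy batch size, input dim, neurons
X = rng.normal(size=(B, D))          # input batch
W = rng.normal(size=(H, D))          # assumed first-layer weights
b = rng.normal(size=H)               # assumed first-layer biases
margin = 0.1                         # assumed activation margin

prob = pulp.LpProblem("batch_perturbation", pulp.LpMinimize)

# Perturbation variables plus auxiliaries encoding |delta| for the L1 objective.
delta = [[pulp.LpVariable(f"d_{i}_{k}") for k in range(D)] for i in range(B)]
absd = [[pulp.LpVariable(f"a_{i}_{k}", lowBound=0) for k in range(D)]
        for i in range(B)]

# Objective: minimize the total absolute perturbation applied to the batch.
prob += pulp.lpSum(absd[i][k] for i in range(B) for k in range(D))

# Linearize |delta| via  absd >= delta  and  absd >= -delta.
for i in range(B):
    for k in range(D):
        prob += absd[i][k] >= delta[i][k]
        prob += absd[i][k] >= -delta[i][k]

# Assumed exclusivity-breaking constraints: every neuron active on every
# perturbed sample, so no neuron is exclusive to a single sample.
for j in range(H):
    for i in range(B):
        pre = pulp.lpSum(float(W[j, k]) * (X[i, k] + delta[i][k])
                         for k in range(D))
        prob += pre + float(b[j]) >= margin

prob.solve(pulp.PULP_CBC_CMD(msg=False))
X_perturbed = X + np.array([[pulp.value(delta[i][k]) for k in range(D)]
                            for i in range(B)])
print("status:", pulp.LpStatus[prob.status])
```

In practice the perturbed batch X_perturbed would replace X in the local training step; because the LP minimizes the perturbation magnitude, the impact on model training is kept small, which matches the abstract's claim of almost no loss in model performance.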
Keywords
targeted defense,batch perturbation,neuron exclusivity,gradient inversion attack,federated learning