DP-GSGLD: A Bayesian optimizer inspired by differential privacy defending against privacy leakage in federated learning

Chengyi Yang, Kun Jia, Deli Kong, Jiayin Qi, Aimin Zhou

COMPUTERS & SECURITY (2024)

Abstract
Stochastic Gradient Langevin Dynamics (SGLD) is believed to preserve differential privacy as an intrinsic attribute, since it obtains randomness from posterior sampling and its inherent noise. In this paper, we propose Differentially Private General Stochastic Gradient Langevin Dynamics (DP-GSGLD), a novel variant of SGLD that estimates gradients for parameter updates via Bayesian sampling. We introduce the technique of parameter clipping and prove that DP-GSGLD satisfies Differential Privacy (DP). We conduct experiments on several image datasets to defend against gradient attacks, which commonly arise in federated learning. The results demonstrate that DP-GSGLD reduces model training time and achieves higher accuracy at the same privacy level.
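The abstract does not give the full DP-GSGLD update rule, but the two ingredients it names, a Langevin-style sampling step and parameter clipping, can be illustrated together. The following is a minimal, hypothetical sketch of a plain SGLD step with L2 parameter clipping in NumPy; the function sgld_step_with_clipping and its arguments are illustrative names, not the authors' algorithm.

import numpy as np

def sgld_step_with_clipping(theta, grad_fn, step_size, clip_norm, rng):
    """One SGLD step: half-gradient descent plus Gaussian noise with
    variance equal to the step size, followed by projecting the
    parameters into an L2 ball of radius clip_norm (the clipping
    technique the abstract mentions)."""
    grad = grad_fn(theta)
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    theta = theta - 0.5 * step_size * grad + noise
    # Parameter clipping: bounding the parameter norm bounds the
    # sensitivity of each update, a prerequisite for a DP guarantee.
    norm = np.linalg.norm(theta)
    if norm > clip_norm:
        theta = theta * (clip_norm / norm)
    return theta

# Toy usage: sample from a posterior proportional to exp(-||theta||^2 / 2),
# whose negative log-density has gradient theta.
rng = np.random.default_rng(0)
theta = rng.normal(size=4)
for _ in range(1000):
    theta = sgld_step_with_clipping(theta, grad_fn=lambda t: t,
                                    step_size=1e-2, clip_norm=5.0, rng=rng)
print(theta)

The clipping step is what makes a DP analysis possible: with the parameter norm bounded, the influence of any single training example on the released update is bounded, and the Langevin noise then provides the randomization the privacy proof relies on.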
Key words
Differential privacy, Stochastic gradient Langevin dynamics, Bayesian learning, Deep learning optimizer