Effects of Quantization on Federated Learning with Local Differential Privacy

GLOBECOM (2022)

Abstract
Federated learning (FL) enables large-scale machine learning while preserving user data privacy thanks to its decentralized structure. However, user data can still be inferred from the shared model updates. To strengthen privacy, we consider FL with local differential privacy (LDP). One of the challenges in FL is its huge communication cost, caused by the iterative transmission of model updates. This cost has been reduced by quantization in the literature; however, few works consider its effect on LDP and the unboundedness of the randomized model updates. We propose a communication-efficient FL algorithm with LDP that applies a Gaussian mechanism followed by quantization and Elias-gamma coding. A novel design of the algorithm guarantees LDP even after quantization. For the proposed algorithm, we provide a theoretical trade-off analysis between privacy and communication cost: quantization reduces the communication cost but requires a larger perturbation to guarantee LDP. Experimental results show that the accuracy is mostly affected by the noise from the LDP mechanism, and that this effect becomes more pronounced when the quantization error is larger. Nonetheless, in our experiments the proposed algorithm achieves LDP with a significant compression ratio at only a slight cost in accuracy. Furthermore, it outperforms an algorithm based on a discrete Gaussian mechanism under the same privacy budget and communication cost constraints.
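The client-side pipeline described in the abstract (Gaussian perturbation of the local model update, scalar quantization, then Elias-gamma coding of the resulting integers) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's algorithm: the clipping norm, the noise calibration sigma = C*sqrt(2 ln(1.25/delta))/epsilon, the uniform quantization step, and the signed-to-positive mapping are all choices made here for the sketch.

```python
import numpy as np

def gaussian_mechanism(update, clip_norm, epsilon, delta, rng):
    """Clip the update to L2 norm `clip_norm`, then add Gaussian noise.
    Sigma follows the standard (epsilon, delta) Gaussian-mechanism calibration
    (an assumption here, not necessarily the paper's exact calibration)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

def quantize(values, step):
    """Uniform scalar quantization: round each value to the nearest multiple of `step`."""
    return np.rint(values / step).astype(np.int64)

def elias_gamma(n):
    """Elias-gamma code of a positive integer n >= 1, returned as a bit string."""
    assert n >= 1
    bits = bin(n)[2:]
    return "0" * (len(bits) - 1) + bits

def encode_update(q):
    """Map signed quantized integers to positive integers (zigzag shifted by 1)
    and concatenate their Elias-gamma codes into one bitstream."""
    positive = np.where(q >= 0, 2 * q + 1, -2 * q)  # 0->1, -1->2, 1->3, -2->4, ...
    return "".join(elias_gamma(int(n)) for n in positive)

rng = np.random.default_rng(0)
update = rng.normal(size=1000)  # stand-in for a local model update
noisy = gaussian_mechanism(update, clip_norm=1.0, epsilon=2.0, delta=1e-5, rng=rng)
bitstream = encode_update(quantize(noisy, step=0.05))
print(f"encoded length: {len(bitstream)} bits vs. {update.size * 32} bits uncompressed")
```

The sketch makes the trade-off in the abstract visible: a coarser quantization step shortens the Elias-gamma bitstream, while the privacy noise (whose scale grows as epsilon shrinks) dominates the distortion of the transmitted update.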
Keywords
local differential privacy,federated learning,quantization