Combining homomorphic encryption and differential privacy in federated learning

2023 20th Annual International Conference on Privacy, Security and Trust (PST)

Abstract
Recent works have investigated the relevance and practicality of techniques such as Differential Privacy (DP) and Homomorphic Encryption (HE) for strengthening training-data privacy in Federated Learning protocols. As these two techniques cover different sources of confidentiality threats (other clients/end-users for the former, the aggregation server for the latter), they must be combined consistently to bridge the gap towards more realistic deployment scenarios. In this paper, we achieve that goal by means of a novel stochastic quantization operator which allows us to establish DP guarantees when the noise is both quantized and bounded due to the use of HE. The paper concludes with experiments on the FEMNIST dataset showing that, at the precision required to reach a state-of-the-art privacy/utility trade-off (which directly determines the HE parameters and, hence, the cost of HE operations), the computation-time overhead imputable to HE for the whole training of a 500k-parameter model is between 0.2% and 1.1%, depending on the key setup (single key or threshold).
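The core building block the abstract describes is a stochastic quantization operator: model updates must be clipped and rounded onto an integer grid before HE encoding, and the rounding must be randomized (unbiased) so that DP noise analysis still holds on the quantized, bounded values. The sketch below illustrates the general idea of unbiased stochastic rounding under clipping; the function name, signature, and parameters are illustrative assumptions, not the paper's actual operator.

```python
import math
import random

def stochastic_quantize(values, delta, clip):
    """Unbiased stochastic rounding onto an integer grid of step `delta`.

    Each input is first clipped to [-clip, clip] (bounded support, as needed
    for HE plaintext encoding), then mapped to floor(x/delta) or
    floor(x/delta) + 1 with probabilities chosen so that the expectation of
    q * delta equals the clipped value. Illustrative sketch only.
    """
    quantized = []
    for x in values:
        x = max(-clip, min(clip, x))      # bound the value
        scaled = x / delta                # position on the grid
        low = math.floor(scaled)
        frac = scaled - low               # probability of rounding up
        quantized.append(low + (1 if random.random() < frac else 0))
    return quantized
```

The unbiasedness property (`E[q * delta] = clip(x)`) is what lets an additive DP noise term survive quantization: the rounding error is zero-mean and bounded by `delta`, so it can be folded into the privacy/utility analysis rather than biasing the aggregated model.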
Keywords
Federated learning, differential privacy, homomorphic encryption, quantization