Robust and privacy-preserving federated learning with distributed additive encryption against poisoning attacks

Fan Zhang, Hui Huang, Zhixiong Chen, Zhenjie Huang

Computer Networks (2024)

Abstract
Privacy-preserving federated learning (PPFL) enables collaborative model training across multiple parties while protecting the privacy of sensitive data. However, PPFL is vulnerable to poisoning attacks, as the indistinguishability of ciphertext allows maliciously crafted gradients to bypass existing defense strategies. Privacy-preserving defense strategies have been proposed to resist poisoning attacks by identifying anomalous gradients under ciphertext; specifically, these schemes protect privacy by masking the gradients during detection. However, existing schemes come at the cost of reduced security, since participants may collude to recover the mask and thereby compromise user privacy. In this paper, we propose a robust-enhanced federated learning (REFL) framework to identify malicious gradients over ciphertext and enhance model trustworthiness in scenarios without a trusted entity. Specifically, we design a threshold-based secret generation technique that prevents any single entity from accessing the mask or the private key. Furthermore, we develop a secure consensus technique based on cosine similarity to identify maliciously encrypted gradients, enabling Byzantine fault-tolerant aggregation. Finally, we evaluate REFL's defense performance against two backdoor poisoning attacks on real datasets and compare its computational cost with a Paillier-based defense strategy. The experimental results demonstrate that REFL outperforms the baseline.
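The abstract's cosine-similarity consensus can be illustrated in plaintext. The sketch below is a hypothetical simplification, not the paper's protocol: REFL performs this comparison over encrypted gradients via secure computation, whereas here the gradients are plain NumPy vectors, and the median-score filter and zero threshold are assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two gradient vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def consensus_filter(gradients, threshold=0.0):
    """Toy Byzantine-tolerant aggregation: keep each client's gradient
    only if its median pairwise cosine similarity to the other clients'
    gradients exceeds the threshold, then average the survivors."""
    n = len(gradients)
    scores = []
    for i in range(n):
        sims = [cosine_similarity(gradients[i], gradients[j])
                for j in range(n) if j != i]
        scores.append(np.median(sims))
    kept = [g for g, s in zip(gradients, scores) if s > threshold]
    if not kept:  # fall back to plain averaging if everything is rejected
        kept = gradients
    return np.mean(kept, axis=0)
```

A poisoned update pointing opposite to the benign consensus receives a negative median similarity and is excluded before averaging; in REFL this decision is reached without revealing the individual gradients.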
Keywords
Federated learning, Poisoning defense, Homomorphic encryption, Distributed key generation, Consensus