Fedlabx: a practical and privacy-preserving framework for federated learning

Yuping Yan, Mohammed B. M. Kamel, Marcell Zoltay, Marcell Gal, Roland Hollos, Yaochu Jin, Ligeti Peter, Akos Tenyi

Complex & Intelligent Systems (2024)

Abstract
Federated learning (FL) has drawn attention in academia and industry due to its privacy-preserving capability in training machine learning models. However, critical security attacks and vulnerabilities remain, including gradient leakage and inference attacks. Communication is another bottleneck in basic FL schemes, since large-scale transmission of FL parameters leads to inefficient communication, latency, and slower learning. To overcome these shortcomings, various communication-efficiency strategies and privacy-preserving cryptographic techniques have been proposed; however, any single method can only partially resist privacy attacks. This paper presents a practical, privacy-preserving scheme combining cryptographic techniques with communication networking solutions. We implement Kafka for message distribution, the Diffie-Hellman scheme for secure server aggregation, and gradient differential privacy for inference attack prevention. The proposed approach maintains training efficiency while addressing gradient leakage and inference attacks. In addition, the implementation of Kafka and ZooKeeper provides asynchronous communication and anonymously authenticated computation with role-based access control. Finally, we prove the privacy-preserving properties of the proposed solution via security analysis and empirically demonstrate its efficiency and practicality.
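The secure server aggregation described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the pairwise secrets below stand in for the outputs of a real Diffie-Hellman key exchange, the modulus `P` and all variable names are illustrative assumptions, and gradient updates are encoded as small integers for simplicity. The key idea shown is that each client pair derives a shared mask, one client adds it and the other subtracts it, so the masks cancel in the server-side sum and the server learns only the aggregate.

```python
import random

# Hypothetical sketch of pairwise-mask secure aggregation (assumption:
# in the real protocol the pair secrets come from Diffie-Hellman).
P = 2**61 - 1  # illustrative group modulus

def shared_mask(secret: int, dim: int) -> list:
    """Derive a deterministic mask vector from a shared pair secret."""
    rng = random.Random(secret)
    return [rng.randrange(P) for _ in range(dim)]

def mask_update(update, client_id, secrets, dim):
    """Mask one client's integer-encoded update with pairwise masks."""
    masked = list(update)
    for other_id, secret in secrets[client_id].items():
        mask = shared_mask(secret, dim)
        for k in range(dim):
            # Lower-indexed client adds the mask, higher subtracts it,
            # so every mask cancels when the server sums all updates.
            if client_id < other_id:
                masked[k] = (masked[k] + mask[k]) % P
            else:
                masked[k] = (masked[k] - mask[k]) % P
    return masked

# Toy run: three clients, 4-dimensional updates.
dim = 4
updates = {0: [1, 2, 3, 4], 1: [5, 6, 7, 8], 2: [9, 10, 11, 12]}
pair_secrets = {(0, 1): 1111, (0, 2): 2222, (1, 2): 3333}
secrets = {i: {} for i in updates}
for (i, j), s in pair_secrets.items():
    secrets[i][j] = s
    secrets[j][i] = s

masked = {i: mask_update(u, i, secrets, dim) for i, u in updates.items()}
# The server sums only masked updates; masks cancel modulo P.
total = [sum(m[k] for m in masked.values()) % P for k in range(dim)]
print(total)  # → [15, 18, 21, 24], the plain sum of all updates
```

In the full scheme, gradient differential privacy would add calibrated noise to each client's update before masking, so even the exact aggregate does not expose individual contributions.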
Key words
Federated learning, Kafka, Secure aggregation, Differential privacy