FLBooster: A Unified and Efficient Platform for Federated Learning Acceleration

ICDE 2023

Abstract
Federated learning (FL) has emerged as a paradigm for training a global machine learning model in a distributed manner while accounting for privacy concerns and data protection regulations. Although a variety of FL algorithms have been proposed, training efficiency remains challenging due to massive mathematical computation and expensive client-server communication. Existing FL-acceleration studies are limited in that they address the computation and communication overheads separately, which is suboptimal and constrains their acceleration ability. Moreover, previous studies are typically designed for specific FL scenarios and support only one or two FL models, and thus exhibit poor generality. To fill these critical voids, we propose FLBooster, which provides unified and efficient acceleration for a broad range of FL models and is the first proposal to tackle the computation and communication overheads simultaneously. Specifically, we use GPUs to execute the computation-intensive homomorphic encryption (HE) operations in parallel, which significantly reduces computation costs. In addition, a simple but efficient compression method is designed to reduce the volume of data exchanged between client and server. Extensive experiments with four standard FL models on three datasets show that FLBooster achieves superior speed-ups (14.3×–138×) over state-of-the-art acceleration systems. Finally, we integrate FLBooster into the open-source FL benchmark FATE and offer user-friendly APIs for development.
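The abstract only sketches why GPU parallelism removes the HE bottleneck, so the following is a minimal illustration of the underlying property: each Paillier encryption of a gradient entry is an independent modular exponentiation, so a whole update vector can be encrypted concurrently. This is not FLBooster's GPU implementation; it stands in with the python-paillier (`phe`) library and process-level parallelism, and the helper names (`encrypt_chunk`, `parallel_encrypt`) are hypothetical.

```python
# Illustrative sketch only -- batched, parallel Paillier encryption on CPU
# processes, standing in for FLBooster's GPU-parallel HE kernels.
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from phe import paillier  # python-paillier: pip install phe


def encrypt_chunk(pub, chunk):
    # Each encryption is an independent modular exponentiation, so chunks
    # of a gradient vector can be processed with no cross-dependencies.
    return [pub.encrypt(float(x)) for x in chunk]


def parallel_encrypt(pub, grad, workers=4):
    chunks = np.array_split(grad, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(encrypt_chunk, pub, c) for c in chunks]
        return [c for f in futures for c in f.result()]


if __name__ == "__main__":
    pub, priv = paillier.generate_paillier_keypair(n_length=2048)
    grad = np.random.randn(64)              # mock client gradient
    enc = parallel_encrypt(pub, grad)
    assert abs(priv.decrypt(enc[0]) - float(grad[0])) < 1e-9
```

The same independence is what lets the workload map onto thousands of GPU threads instead of a handful of CPU processes.

The abstract likewise does not specify the compression scheme, so the sketch below uses plain top-k sparsification purely as a representative way to shrink client-server traffic; the function names and the 1% ratio are assumptions, not details from the paper.

```python
import numpy as np


def topk_compress(update, ratio=0.01):
    # Hypothetical example: keep only the largest-magnitude entries of a
    # model update and transmit (index, value) pairs instead of the dense
    # vector. The paper's actual compression method is not described here.
    k = max(1, int(update.size * ratio))
    idx = np.argpartition(np.abs(update), -k)[-k:]  # top-k by magnitude
    return idx.astype(np.int32), update[idx].astype(np.float32)


def topk_decompress(idx, vals, size):
    out = np.zeros(size, dtype=np.float32)
    out[idx] = vals
    return out


update = np.random.randn(100_000)
idx, vals = topk_compress(update, ratio=0.01)
# Payload drops from 100k dense floats to ~1k (index, value) pairs.
restored = topk_decompress(idx, vals, update.size)
```

At a 1% ratio the exchanged payload shrinks roughly two orders of magnitude, which is the kind of reduction a communication-side compressor targets.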
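Note that any such lossy compressor trades a small amount of update fidelity for bandwidth; the abstract reports that FLBooster's overall design still yields 14.3×–138× end-to-end speed-ups, so the two sketches above should be read only as conceptual stand-ins for its computation-side and communication-side optimizations, respectively.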
Keywords
Federated learning, homomorphic encryption, GPU acceleration, efficient communication