CPPer-FL: Clustered Parallel Training for Efficient Personalized Federated Learning

IEEE Transactions on Mobile Computing (2024)

Abstract
In this paper, a clustered parallel training algorithm, called CPPer-FL, is designed for personalized federated learning (Per-FL). CPPer-FL improves the communication and training efficiency of Per-FL from two perspectives: it reduces the burden on the central server and lowers the idle delay clients incur while waiting on interactions. CPPer-FL adopts a client-edge-center learning architecture that offloads the central server's model aggregation and communication burden to distributed edge servers. It also redesigns the cascading model synchronization and updating procedure of conventional Per-FL into a parallel one, improving interaction efficiency during training. Furthermore, two approaches adapt the proposed hierarchical architecture to Per-FL: similarity-based clustering for client-edge association and personalized model aggregation for parallel model updating, so that clients' personal features are preserved throughout training. The convergence of CPPer-FL is formally analyzed and proved. Evaluation results validate the improvements in communication efficiency, model convergence, and model accuracy.
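The abstract names two mechanisms but does not specify their formulations. The Python sketch below illustrates one plausible reading: client-edge association via similarity clustering of flattened model updates, and edge-level personalized aggregation that mixes each client's local model with the edge average. The cosine-similarity metric, the k-means-style assignment, and the mixing weight alpha are all illustrative assumptions, not the paper's actual design.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened model-update vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def associate_clients_with_edges(updates: np.ndarray, num_edges: int,
                                 iters: int = 10, seed: int = 0):
    """Assign each client to the edge server whose centroid its update
    is most similar to (k-means-style; an assumed stand-in for the
    paper's similarity-based clustering).
    updates: (num_clients, dim) array of flattened model updates."""
    rng = np.random.default_rng(seed)
    # Seed edge centroids with randomly chosen client updates.
    centroids = updates[rng.choice(len(updates), num_edges, replace=False)]
    for _ in range(iters):
        # Assignment step: pick the most similar edge centroid per client.
        assign = np.array([
            max(range(num_edges),
                key=lambda e: cosine_similarity(u, centroids[e]))
            for u in updates
        ])
        # Update step: recompute each edge centroid from its clients.
        for e in range(num_edges):
            members = updates[assign == e]
            if len(members) > 0:
                centroids[e] = members.mean(axis=0)
    return assign, centroids

def personalized_edge_aggregate(local_models, alpha: float = 0.5):
    """Personalized aggregation at one edge: each client keeps a convex
    mix of its own model and the edge average, so personal features
    survive aggregation. alpha is a hypothetical mixing weight."""
    edge_avg = np.mean(local_models, axis=0)
    return [alpha * w + (1.0 - alpha) * edge_avg for w in local_models]

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Synthetic demo: 12 clients drawn from 2 latent groups with
    # distinct update directions, so clustering should separate them.
    base = rng.normal(size=(2, 16))
    updates = np.vstack([base[i % 2] + 0.1 * rng.normal(size=16)
                         for i in range(12)])
    assign, _ = associate_clients_with_edges(updates, num_edges=2)
    print("client-edge association:", assign)
    personalized = personalized_edge_aggregate(list(updates), alpha=0.7)
    print("personalized model shape:", personalized[0].shape)

In this reading, edges aggregate only similar clients in parallel, which is consistent with the abstract's claim that personal features are preserved while the central server's load shrinks; the paper itself defines the exact similarity measure and update rule.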
Keywords
Communication- and training-efficient federated learning, parallel model synchronization and updating, personalized federated learning, similarity clustering