Learning to aggregate: A parameterized aggregator to debias aggregation for cross-device federated learning

ICLR 2023

Abstract
Federated learning (FL) has emerged as a novel machine learning setting that enables collaboratively training deep models on decentralized private data. Due to the heterogeneity (non-iidness) of the decentralized data, FL methods (e.g., FedAvg) suffer from unstable and slow convergence. Recent works explain the non-iid problem in FL as client drift and address it by enforcing regularization during local updates. However, these works neglect the heterogeneity across communication rounds: the data of the clients sampled at different communication rounds are also non-iid distributed. We term this period drift, which, together with client drift, can lead to an aggregation bias that degrades convergence. To deal with this, we propose a novel aggregation strategy, named FedPA, that uses a Parameterized Aggregator as an alternative to averaging. We frame FedPA within a meta-learning setting and formulate the aggregator as a meta-learner that learns to aggregate the model parameters of clients. FedPA can directly learn the aggregation bias and calibrate the aggregated parameters toward a better direction to the optimum. Experiments show that FedPA achieves competitive performance compared with conventional baselines.
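To make the contrast concrete, below is a minimal, hypothetical sketch in PyTorch of the idea the abstract describes: standard FedAvg averaging versus a learnable (parameterized) aggregator that produces per-client mixing weights. The `ParameterizedAggregator` module and its architecture are illustrative assumptions only, not the paper's actual FedPA design or meta-learning objective.

```python
# A minimal sketch (assumed design, not the paper's FedPA architecture)
# contrasting FedAvg averaging with a learnable parameterized aggregator.
import torch
import torch.nn as nn


def fedavg_aggregate(client_params: list[torch.Tensor],
                     client_weights: list[float]) -> torch.Tensor:
    """Standard FedAvg: a fixed weighted average of flattened client
    parameter vectors (weights are typically local dataset sizes)."""
    total = sum(client_weights)
    return sum((w / total) * p for w, p in zip(client_weights, client_params))


class ParameterizedAggregator(nn.Module):
    """Toy learnable aggregator: scores each client's flattened update
    and mixes clients with softmax weights instead of fixed averaging."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, client_params: torch.Tensor) -> torch.Tensor:
        # client_params: (num_clients, dim) flattened parameter vectors.
        logits = self.scorer(client_params).squeeze(-1)    # (num_clients,)
        mix = torch.softmax(logits, dim=0)                 # learned weights
        return (mix.unsqueeze(-1) * client_params).sum(0)  # aggregated (dim,)
```

In a meta-learning framing like the one the abstract describes, the aggregator's own parameters (here, `scorer`) would be trained across communication rounds, e.g., by backpropagating a proxy loss of the aggregated model through the mixing weights, so that the learned aggregation corrects the bias that plain averaging accumulates under client and period drift.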
Keywords
Federated learning