Peer-to-Peer Variational Federated Learning over Arbitrary Graphs

IEEE Journal on Selected Areas in Information Theory (2022)

Abstract
This paper proposes a federated supervised learning framework over a general peer-to-peer network with agents that act in a variational Bayesian fashion. The proposed framework consists of local agents, each of which keeps a local “posterior probability distribution” over the parameters of a global model; the posterior is updated over time in a local fashion through two subroutines: 1) variational model training given (a batch of) local labeled data, and 2) asynchronous communication and model aggregation with the 1-hop neighbors. Inspired by popular federated learning (model averaging) schemes, the framework allows the training data to remain distributed on mobile devices while utilizing peer-to-peer model aggregation in a social network. The proposed framework is shown to allow for a systematic treatment of model aggregation over any arbitrary connected graph with consistent (in general, non-iid) local labeled data. Specifically, under mild technical conditions, the proposed algorithm allows agents with local data to learn a shared model explaining the global training data in a decentralized fashion over an arbitrary peering/connectivity graph. Furthermore, the rate of convergence is characterized and shown to be a function of each individual agent’s data quality weighted by its eigenvector centrality. Empirically, the proposed methodology is shown to work well with efficient variational Bayesian inference techniques to train Bayesian neural networks in a decentralized manner even when the local data batches are not identically distributed.
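The two subroutines described above can be pictured with a minimal sketch. The Agent class, the mean-field Gaussian parameterization, the plain averaging of variational parameters, and the toy likelihood below are all illustrative assumptions for exposition, not the paper's exact algorithm.

```python
# Minimal sketch of the two subroutines in the abstract (assumed details:
# mean-field Gaussian posterior, crude ELBO ascent, plain parameter averaging).
import numpy as np

class Agent:
    def __init__(self, dim, rng):
        # Local "posterior" over the global model parameters,
        # parameterized as a mean-field Gaussian (an assumption).
        self.mu = rng.normal(size=dim)
        self.log_var = np.zeros(dim)

    def local_variational_step(self, grad_log_lik, lr=0.01):
        # Subroutine 1: one stochastic variational update on a local batch.
        # Here reduced to a gradient step on the mean; the paper performs
        # full variational model training.
        self.mu += lr * grad_log_lik(self.mu)

    def aggregate(self, neighbors):
        # Subroutine 2: model aggregation with 1-hop neighbors.
        # Plain averaging of variational parameters is an assumption;
        # other aggregation rules (e.g., in natural parameters) are possible.
        self.mu = np.mean([self.mu] + [n.mu for n in neighbors], axis=0)
        self.log_var = np.mean([self.log_var] + [n.log_var for n in neighbors], axis=0)

# Toy usage on a 4-agent ring graph; agents update sequentially, loosely
# mimicking asynchronous gossip.
rng = np.random.default_rng(0)
agents = [Agent(dim=3, rng=rng) for _ in range(4)]
for _ in range(10):
    for i, a in enumerate(agents):
        a.local_variational_step(lambda mu: -(mu - 1.0))  # toy quadratic log-likelihood
        a.aggregate([agents[(i - 1) % 4], agents[(i + 1) % 4]])
```

In this sketch all agents share a consistent toy likelihood, so the averaged means drift toward a common value; the abstract's convergence result additionally weights each agent's data quality by its eigenvector centrality in the peering graph.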
Keywords
Federated learning, variational Bayes, peer-to-peer network, decentralized learning