
Distributed Banach-Picard Iteration: Application to Distributed Parameter Estimation and PCA

IEEE Transactions on Signal Processing (2023)

Abstract
We recently proposed an algorithmic framework, distributed Banach-Picard iteration (DBPI), allowing a set of agents linked by a communication network to find a fixed point of a map that: (a) is the average of individual maps held by said agents; (b) is locally contractive (LC). Given such a map, DBPI yields a distributed algorithm provably inheriting the local linear convergence (LLC) of the standard Banach-Picard iteration for the centralized (average) map. Here, we instantiate DBPI in two classical problems, which amounts to proving that the conditions guaranteeing the LLC of DBPI hold. First, taking Sanger's algorithm for principal component analysis (PCA), we show that it corresponds to iterating an LC map that can be written as the average of local maps held by agents with private data subsets. Applying DBPI then recovers a previous distributed PCA algorithm, which lacked a convergence proof, thus closing that gap. In the second instantiation, we show that a variant of the expectation-maximization (EM) algorithm for parameter estimation from noisy, faulty measurements in sensor networks can be written as iterating an LC map that is the average of local maps. Consequently, the DBPI framework yields a distributed algorithm automatically inheriting the LLC guarantee of its centralized counterpart. Verifying the LC condition for EM is nontrivial (as the underlying operator depends on random samples) and a contribution in itself, possibly of independent interest. Finally, we illustrate experimentally the linear convergence of the proposed distributed EM algorithm, contrasting with the sub-linear rate of the previous version.
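The framework rests on the standard Banach-Picard iteration: repeatedly applying a contractive map converges linearly to its unique fixed point, and in the DBPI setting that map is the average of the agents' local maps. A minimal sketch of this centralized iteration, using hypothetical affine local maps T_i(x) = A_i x + b_i chosen so that their average is a contraction (all names and constants below are illustrative, not from the paper):

```python
import numpy as np

# Illustrative setup: n_agents local affine maps T_i(x) = A_i @ x + b_i.
# The A_i are scaled small so the average map is a contraction.
rng = np.random.default_rng(0)
d, n_agents = 3, 5
As = [0.1 * rng.standard_normal((d, d)) for _ in range(n_agents)]
bs = [rng.standard_normal(d) for _ in range(n_agents)]

def T(x):
    # The centralized map: average of the agents' local maps.
    return sum(A @ x + b for A, b in zip(As, bs)) / n_agents

# Banach-Picard iteration: x_{k+1} = T(x_k).
x = np.zeros(d)
for _ in range(100):
    x = T(x)

# At convergence, x is (numerically) a fixed point: x = T(x).
print(np.allclose(x, T(x)))  # → True
```

Because the contraction factor is bounded below 1, the error shrinks geometrically per iteration, which is the linear-convergence rate that DBPI is shown to inherit in the distributed setting.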
Keywords
Distributed computation, Banach-Picard iteration, fixed points, distributed EM, distributed PCA, consensus