Distributed Adaptive Learning Under Communication Constraints

IEEE Open Journal of Signal Processing (2024)

Abstract
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data. To this end, the agents implement a distributed cooperative strategy where each agent is allowed to perform local exchange of information with its neighbors. In order to cope with communication constraints, the exchanged information must be compressed to reduce the communication load. We propose a distributed diffusion strategy nicknamed ACTC (Adapt-Compress-Then-Combine), which implements the following three operations: adaptation, where each agent performs an individual stochastic-gradient update; compression, which leverages a recently introduced class of stochastic compression operators; and combination, where each agent combines the compressed updates received from its neighbors. The main elements of novelty of this work are as follows: $i)$ adaptive strategies, where constant (as opposed to diminishing) step-sizes are critical to give the agents the ability to respond in real time to nonstationary variations in the observed model; $ii)$ directed, i.e., non-symmetric combination policies, which allow us to enhance the role played by the network topology in the learning performance; $iii)$ global strong convexity, a condition under which the individual agents might feature even non-convex cost functions. Under this demanding setting, we establish that the iterates of the ACTC strategy fluctuate around the exact global optimizer with a mean-square deviation on the order of the step-size, achieving remarkable savings in communication resources. Comparison against state-of-the-art learning strategies with compressed data highlights the benefits of the proposed solution.
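To make the adapt-compress-combine pattern described in the abstract concrete, below is a minimal Python sketch of one plausible instantiation. It is not the paper's exact recursion: the quadratic local costs, the rand-k sparsifier used as the stochastic compression operator, the ring combination matrix `C`, the differential-compression state `q`, and the damping parameter `zeta` are all illustrative assumptions chosen only to show the three steps and the role of a constant step-size `mu`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): N agents with quadratic local costs
# sharing a common global minimizer w_star, observed through noisy gradients.
N, d = 10, 20
A_loc = [np.diag(rng.uniform(0.1, 2.0, d)) for _ in range(N)]
w_star = rng.standard_normal(d)

def stochastic_grad(i, w):
    """Noisy gradient of agent i's local cost at w (toy streaming-data model)."""
    return A_loc[i] @ (w - w_star) + 0.05 * rng.standard_normal(d)

def rand_k(x, k=4):
    """Unbiased rand-k sparsifier: keep k random entries, rescaled by d/k."""
    out = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    out[idx] = (x.size / k) * x[idx]
    return out

# Directed ring with self-loops; rows of C sum to 1 (illustrative combination policy).
C = np.zeros((N, N))
for i in range(N):
    C[i, i] = 0.6
    C[i, (i - 1) % N] = 0.4

mu, zeta, T = 0.05, 0.5, 3000      # constant step-size, damping factor, iterations
w = np.zeros((N, d))               # local iterates
q = np.zeros((N, d))               # compressed states shared over the network

for t in range(T):
    # Adapt: individual stochastic-gradient step with a constant step-size.
    psi = np.array([w[i] - mu * stochastic_grad(i, w[i]) for i in range(N)])
    # Compress: each agent transmits only a compressed difference; every node
    # tracks the resulting quantized state q[i] (differential compression, assumed).
    q = q + zeta * np.array([rand_k(psi[i] - q[i]) for i in range(N)])
    # Combine: each agent mixes the compressed states received from its neighbors.
    w = C @ q

msd = np.mean(np.sum((w - w_star) ** 2, axis=1))
print(f"mean-square deviation after {T} iterations: {msd:.2e}")
```

With a constant step-size, the iterates do not converge exactly but keep fluctuating around the global optimizer; in this toy run the final mean-square deviation scales with `mu`, mirroring the qualitative behavior claimed in the abstract.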
Keywords
Distributed optimization, adaptation and learning, diffusion strategies, stochastic quantizers