Decentralized online convex optimization with compressed communications

Automatica (2023)

Abstract
Due to the iterative information exchange between agents, decentralized multi-agent optimization algorithms often incur large communication overhead, which is not affordable in many practical systems with scarce resources. To address this communication bottleneck, we compress the information exchanged between neighboring agents in order to solve the decentralized online convex optimization problem, where each agent has a time-varying local loss function and the goal is to minimize the accumulated global loss by choosing actions sequentially. We develop a decentralized online gradient descent algorithm based on compressed communications, where the compression operators significantly reduce the amount of data to be sent. An error-compensation technique is utilized to mitigate the error accumulation caused by the communication compression. We further analyze the performance of the proposed algorithm. The regret is shown to be bounded from above by O(√T), where T is the time horizon, which matches (in order sense) the regret bound of vanilla decentralized online gradient descent with perfect communication. Thus, the proposed algorithm reduces the communication overhead remarkably without much degradation of the optimization performance. This is further corroborated by numerical experiments, where compressed communication reduces the number of transmitted bits by an order of magnitude without compromising the optimization performance.
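The abstract describes the algorithm only at a high level. Below is a minimal, illustrative Python sketch of decentralized online gradient descent with compressed communication and error compensation, assuming a top-k sparsification compressor, a doubly stochastic mixing matrix W, and a CHOCO-style difference-compression update; the function names and parameters (k, eta, gamma) are hypothetical choices for illustration, not the paper's exact algorithm.

```python
import numpy as np

def top_k(v, k):
    # Top-k sparsification: keep only the k largest-magnitude entries.
    # Any contractive compressor could be substituted here (assumption).
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def decentralized_ogd_compressed(grads, W, d, T, k=2, eta=0.1, gamma=0.5):
    """Illustrative decentralized OGD with compressed messages.

    grads[i] is a callable (t, x) -> gradient of agent i's time-t loss at x.
    W is a doubly stochastic mixing matrix encoding the network topology.
    """
    n = W.shape[0]
    x = np.zeros((n, d))      # each agent's local decision variable
    x_hat = np.zeros((n, d))  # publicly known (compressed) copies of x
    e = np.zeros((n, d))      # accumulated compression error (error compensation)

    for t in range(T):
        # Each agent compresses the error-compensated difference between its
        # state and its public copy, and transmits only that sparse message.
        msg = np.array([top_k(x[i] - x_hat[i] + e[i], k) for i in range(n)])
        e = (x - x_hat + e) - msg   # residual carried over to the next round
        x_hat = x_hat + msg         # all agents update their shared view

        # Consensus step on the compressed copies, then a local gradient step.
        mixed = W @ x_hat
        for i in range(n):
            g = grads[i](t, x[i])
            x[i] = x[i] + gamma * (mixed[i] - x_hat[i]) - eta * g
    return x

# Hypothetical usage: 3 agents on a fully connected graph, quadratic losses.
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
targets = [np.ones(4), -np.ones(4), 2 * np.ones(4)]
grads = [lambda t, x, a=a: 2 * (x - a) for a in targets]
x_final = decentralized_ogd_compressed(grads, W, d=4, T=200)
```

With a step size on the order of eta ∝ 1/√T, online gradient descent of this kind typically attains the O(√T) regret quoted above; each agent transmits only k of the d coordinates per round, which is where the bit savings come from.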
Keywords
Decentralized optimization, Online convex optimization, Communication efficiency, Compressed communications, Error compensation