Localization and Approximations for Distributed Non-convex Optimization

Journal of Optimization Theory and Applications (2024)

Abstract
Distributed optimization has many applications in communication networks, sensor networks, signal processing, machine learning, and artificial intelligence. Methods for distributed convex optimization have been widely investigated, while those for non-convex objectives are less well understood. One of the first non-convex distributed optimization frameworks over an arbitrary interaction graph was proposed by Di Lorenzo and Scutari (IEEE Trans Signal Inf Process Netw 2:120–136, 2016); it iteratively applies a combination of local optimization with convex approximations and local averaging. Motivated by application problems such as resource allocation in multi-cellular networks, we generalize the existing results in two ways. First, when the decision variables are separable so that the objectives exhibit only partial dependency, we reduce the communication and memory complexity of the algorithm: nodes keep and communicate only their local variables instead of the whole vector of variables. Second, we relax the assumption that the objectives' gradients are bounded and Lipschitz by means of successive proximal approximations. The proposed algorithmic framework is shown to be more widely applicable and numerically stable.
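The iterative scheme the abstract describes (each node minimizes a convex surrogate of its local non-convex cost, then averages with its neighbours over the interaction graph) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the problem instance, proximal weight `tau`, relaxation step `gamma`, and ring-graph weights are all hypothetical, and the sketch omits the gradient-tracking, partial-dependency, and successive proximal-approximation machinery of the actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 3

# Hypothetical local non-convex costs f_i(x) = 0.5 x^T A_i x + b_i^T x + sin(c_i^T x)
A = [np.diag(rng.uniform(0.5, 2.0, dim)) for _ in range(n_nodes)]
b = [rng.normal(size=dim) for _ in range(n_nodes)]
c = [rng.normal(size=dim) for _ in range(n_nodes)]

def grad_f(i, x):
    # Gradient of node i's local cost (A_i is diagonal, hence symmetric).
    return A[i] @ x + b[i] + np.cos(c[i] @ x) * c[i]

# Doubly stochastic averaging weights on a ring interaction graph.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

tau, gamma = 5.0, 0.5                  # proximal weight and relaxation step
X = rng.normal(size=(n_nodes, dim))    # one local copy of x per node

for _ in range(200):
    # Local step: minimize the strongly convex surrogate
    # grad_f(x_i)^T (x - x_i) + (tau/2)||x - x_i||^2,
    # whose closed-form minimizer is a scaled gradient step.
    X_hat = np.array([X[i] - grad_f(i, X[i]) / tau for i in range(n_nodes)])
    # Consensus step: relax toward the surrogate solution, then average.
    X = W @ (X + gamma * (X_hat - X))

# The local copies approach consensus near a stationary point of the sum.
consensus_gap = np.max(np.abs(X - X.mean(axis=0)))
avg_grad = np.linalg.norm(sum(grad_f(i, X.mean(axis=0)) for i in range(n_nodes)))
print(consensus_gap, avg_grad)
```

Because this simplified variant uses only local gradients (no gradient tracking), it converges to a neighbourhood of a stationary point rather than exactly; the full framework corrects for this.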
Keywords
Distributed optimization, Non-convex optimization, Localization, Proximal approximation