Federated learning for minimizing nonsmooth convex loss functions.

Math. Found. Comput. (2023)

Abstract
Federated learning is a distributed learning framework for protecting privacy, in which local clients collaboratively train a shared model via a central server. Many existing methods in the federated learning literature are analyzed under smoothness conditions on the loss functions and require access to gradient information to run local optimization algorithms such as stochastic gradient descent or dual averaging; some methods even require strong convexity of the loss function. However, in many real-world applications, such as assessing the readability of texts, first-order gradient information is difficult to obtain, and the strong smoothness and strong convexity of the loss functions are not satisfied. This paper addresses these situations, providing an understanding of federated learning when the loss functions are nonsmooth, gradient information is unavailable, and no strong convexity condition is required. Based on Nesterov's zeroth-order (gradient-free) techniques, we propose a zeroth-order stochastic federated learning method. Constant and decreasing step-size strategies are considered. Moreover, a new type of approximating sequence is proposed in federated learning for strictly decreasing step sizes. Expected error bounds for the proposed approximating sequence and learning rates of the proposed method are derived under some selection rules for the step sizes and smoothing parameters.
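The abstract does not give implementation details. As a rough illustration only, the Python sketch below combines Nesterov-style Gaussian-smoothing (two-point) zeroth-order gradient estimates with simple federated averaging of local client updates; all names, the toy nonsmooth losses, and the step-size and smoothing-parameter schedules are assumptions for illustration, not the algorithm or the selection rules analyzed in the paper.

```python
# Hypothetical sketch: gradient-free local updates via Gaussian smoothing
# (a Nesterov-style two-point estimator), aggregated by plain averaging.
import numpy as np

def zo_gradient(loss, w, mu, rng):
    """Two-point Gaussian-smoothing estimate of the gradient of `loss` at w."""
    u = rng.standard_normal(w.shape)              # random Gaussian direction
    return (loss(w + mu * u) - loss(w)) / mu * u  # finite difference along u

def local_update(loss, w, steps, eta, mu, rng):
    """A few zeroth-order steps on one client's (possibly nonsmooth) loss."""
    w = w.copy()
    for _ in range(steps):
        w -= eta * zo_gradient(loss, w, mu, rng)
    return w

def federated_round(client_losses, w_global, eta, mu, rng, local_steps=5):
    """One communication round: clients update locally, server averages."""
    updates = [local_update(f, w_global, local_steps, eta, mu, rng)
               for f in client_losses]
    return np.mean(updates, axis=0)

# Toy usage: two clients with nonsmooth convex losses (hinge-like and l1).
rng = np.random.default_rng(0)
clients = [lambda w: np.maximum(0.0, 1.0 - w @ np.array([1.0, -1.0])),
           lambda w: np.abs(w).sum()]
w = np.zeros(2)
for t in range(1, 51):
    eta_t = 0.5 / np.sqrt(t)   # decreasing step size (illustrative choice)
    mu_t = 1e-2 / t            # shrinking smoothing parameter (illustrative)
    w = federated_round(clients, w, eta_t, mu_t, rng)
print(w)
```

The schedules above merely illustrate that both the step size and the smoothing parameter are tuned over rounds; the paper derives its error bounds under its own selection rules for these quantities.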
Keywords
Federated learning, zeroth-order gradient, Gaussian approximation, learning rate, approximating sequence