Differential Privacy in Distributed Optimization with Gradient Tracking

IEEE Transactions on Automatic Control (2024)

Abstract
In recent years, there has been growing interest in distributed optimization, in which agents collaboratively attain an optimum by exchanging information with their neighbours. Among the various distributed algorithms available, optimization with gradient tracking is particularly notable for its superior convergence results, especially over directed graphs. However, privacy concerns arise when gradient information is transmitted directly, since doing so induces additional information leakage. Surprisingly, the literature has not adequately addressed the associated privacy issues. In response to this gap, our paper proposes a privacy-preserving distributed optimization algorithm with gradient tracking that adds noise to the transmitted messages, namely, the decision variables and the estimate of the aggregated gradient. We prove two dilemmas for this kind of algorithm. In the first dilemma, we reveal that such a distributed optimization algorithm with gradient tracking cannot achieve $\epsilon$-differential privacy (DP) and exact convergence simultaneously. Building on this, we subsequently show that the algorithm fails to achieve $\epsilon$-DP when employing non-summable stepsizes in the presence of Laplace noise. It is crucial to emphasize that these findings hold regardless of the size of the privacy metric $\epsilon$. We then rigorously analyse the convergence performance and privacy level under summable stepsize sequences and Laplace noise, since only summable stepsizes are meaningful to study in this setting. We derive sufficient conditions under which stochastically bounded accuracy and $\epsilon$-DP can be attained simultaneously. Recognizing that several options can meet these conditions, we further derive an upper bound on the variance of the mean error and give an explicit expression for $\epsilon$ under such conditions. Numerical simulations are provided to demonstrate the effectiveness of the proposed algorithm.
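To make the mechanism described in the abstract concrete, the following is a minimal sketch of noise-perturbed gradient tracking. It is not the authors' exact algorithm: for simplicity it assumes a static undirected ring with doubly stochastic weights (rather than a general directed graph), quadratic local costs, a geometrically decaying (hence summable) stepsize, and geometrically decaying Laplace noise scales on both transmitted quantities. All parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 200                        # agents, iterations (assumed values)
b = rng.normal(size=n)               # local targets; global optimum is mean(b)

W = np.zeros((n, n))                 # doubly stochastic ring weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

grad = lambda x: x - b               # local gradients of f_i(x) = (x - b_i)^2 / 2
x = np.zeros(n)                      # decision variables
y = grad(x)                          # gradient trackers, y_i^0 = grad f_i(x_i^0)
a0, q, s0, r = 0.5, 0.95, 0.1, 0.9   # stepsize / noise schedules (assumed)

for k in range(T):
    alpha_k = a0 * q**k                              # summable stepsize
    noise_x = rng.laplace(scale=s0 * r**k, size=n)   # Laplace noise on sent x
    noise_y = rng.laplace(scale=s0 * r**k, size=n)   # Laplace noise on sent y
    x_sent, y_sent = x + noise_x, y + noise_y        # only perturbed messages leave an agent
    x_new = W @ x_sent - alpha_k * y                 # consensus step + tracked descent
    y = W @ y_sent + grad(x_new) - grad(x)           # track the average gradient
    x = x_new

print("consensus spread:", np.std(x), "| distance to optimum:", abs(x.mean() - b.mean()))
```

Consistent with the abstract's dilemmas, this sketch does not converge exactly: with summable stepsizes and decaying Laplace noise, the iterates settle in a stochastically bounded neighbourhood of the optimum rather than reaching it.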
Keywords
Gradient tracking, distributed optimization, differential privacy, directed graph