Privacy-Preserving Incremental ADMM for Decentralized Consensus Optimization

IEEE Transactions on Signal Processing (2020)

Abstract
The alternating direction method of multipliers (ADMM) has recently been recognized as a promising optimizer for large-scale machine learning models. However, few results study ADMM from the perspective of communication cost, especially jointly with privacy preservation, both of which are critical for distributed learning. We investigate the communication efficiency and privacy preservation of ADMM for solving the consensus optimization problem over decentralized networks. Since walk algorithms can reduce the communication load, we first propose an incremental ADMM (I-ADMM) based on the walk algorithm, whose updating order follows a Hamiltonian cycle. However, I-ADMM cannot guarantee privacy for agents against external eavesdroppers, even when randomized initialization is applied. To protect the privacy of agents, we then propose two privacy-preserving incremental ADMM algorithms, PI-ADMM1 and PI-ADMM2, which perturb the step sizes and the primal variables, respectively. Through theoretical analysis, we prove convergence and privacy preservation for PI-ADMM1, and these results are further supported by numerical experiments. Moreover, simulations demonstrate that the proposed PI-ADMM1 and PI-ADMM2 algorithms are communication efficient compared with state-of-the-art methods.
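To make the incremental, walk-based update pattern described in the abstract concrete, the sketch below runs ADMM on a toy consensus-averaging problem in which agents are visited sequentially along a fixed Hamiltonian cycle and the step size is randomly perturbed each cycle, in the spirit of PI-ADMM1. The quadratic objective, the perturbation range, and names such as rho0 are illustrative assumptions; this is not the paper's exact I-ADMM/PI-ADMM1 update rule.

```python
import numpy as np

# Illustrative sketch (not the paper's exact updates): consensus averaging
#   min_x  sum_i 0.5 * (x - a_i)^2
# solved with ADMM, where agents are activated one by one along a fixed
# Hamiltonian cycle and the penalty/step size rho is perturbed every cycle.
rng = np.random.default_rng(0)
n = 10                               # number of agents
a = rng.normal(size=n)               # each agent's private local data
order = list(range(n))               # Hamiltonian cycle: 0 -> 1 -> ... -> n-1 -> 0

x = np.zeros(n)                      # local primal variables
y = np.zeros(n)                      # local dual variables
z = 0.0                              # consensus (token) variable carried along the walk
rho0 = 1.0                           # nominal step size (assumed value)

for k in range(50):
    rho = rho0 * rng.uniform(0.8, 1.2)        # randomly perturbed step size
    for i in order:                           # token walks the cycle: local primal updates
        x[i] = (a[i] - y[i] + rho * z) / (1.0 + rho)
    z = np.mean(x + y / rho)                  # agent closing the cycle refreshes the token
    for i in order:                           # second pass along the cycle: dual updates
        y[i] += rho * (x[i] - z)

print("consensus estimate:", z, "  true average:", a.mean())
```

In this toy setting the token variable z converges to the average of the a_i even under the bounded random perturbation of rho, which illustrates the intuition that perturbing step sizes can hide local information from eavesdroppers without destroying convergence.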
Keywords
Privacy, Signal processing algorithms, Convex functions, Convergence, Optimization, Perturbation methods, Machine learning, Decentralized optimization, alternating direction method of multipliers (ADMM), privacy preservation