Efficient Differentially Private Tensor Factorization in the Parallel and Distributed Computing Paradigm

Parallel and Distributed Processing with Applications (2023)

Abstract
Tensor factorization plays a fundamental role in many areas of AI research, yet it faces significant challenges in both privacy protection and computational efficiency. In this study, we propose a novel approach that addresses both issues simultaneously by integrating differential privacy with parallel and distributed computing. To accommodate diverse scenarios, we introduce two models, DPTF-SVRG and ADMM-DPTF, each tailored to a different hardware setting. DPTF-SVRG targets single-GPU environments: it uses an SVRG-style variance-reduction strategy on the stochastic gradients, converging faster than SGD, and achieves on-GPU parallelism through a lock-free asynchronous update scheme. ADMM-DPTF, in turn, applies distributed ADMM to parallelize DPTF-SVRG across multiple GPUs. Experimental results demonstrate that our algorithms outperform existing baselines while maintaining differential privacy.
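
The abstract does not spell out the algorithm, but as a rough illustration of the ingredients it names, the sketch below combines SVRG-style variance reduction with Gaussian-mechanism noise in rank-R CP factorization of a 3-way tensor. Every name and parameter here (dp_svrg_cp, clip, sigma, the per-entry sampling scheme) is an illustrative assumption, not the paper's DPTF-SVRG.

```python
# A minimal NumPy sketch, NOT the paper's DPTF-SVRG: SVRG-style
# variance-reduced updates with Gaussian-mechanism noise for rank-R
# CP factorization of a dense 3-way tensor. All names and the
# per-entry sampling scheme are illustrative assumptions.
import numpy as np

def cp_reconstruct(A, B, C):
    # X_hat[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def full_grads(X, A, B, C):
    # Gradients of 0.5 * ||X - X_hat||_F^2 w.r.t. each factor matrix.
    E = cp_reconstruct(A, B, C) - X
    gA = np.einsum('ijk,jr,kr->ir', E, B, C)
    gB = np.einsum('ijk,ir,kr->jr', E, A, C)
    gC = np.einsum('ijk,ir,jr->kr', E, A, B)
    return gA, gB, gC

def privatize(g, clip, sigma, rng):
    # Gaussian mechanism: clip the gradient's L2 norm to `clip`,
    # then add noise scaled by the clipping bound.
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
    return g + rng.normal(0.0, sigma * clip, size=g.shape)

def dp_svrg_cp(X, R=4, epochs=30, inner=200, lr=5e-3,
               clip=1.0, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, R)) * 0.1 for n in (I, J, K))
    for _ in range(epochs):
        # SVRG anchor: snapshot the factors and their full gradients.
        As, Bs, Cs = A.copy(), B.copy(), C.copy()
        gAs, gBs, gCs = full_grads(X, As, Bs, Cs)
        for _ in range(inner):
            i, j, k = rng.integers(I), rng.integers(J), rng.integers(K)
            e = A[i] @ (B[j] * C[k]) - X[i, j, k]       # current residual
            es = As[i] @ (Bs[j] * Cs[k]) - X[i, j, k]   # snapshot residual
            # Variance-reduced row gradients: stochastic term at the
            # current point, minus the same term at the snapshot, plus
            # the snapshot's full row gradient (rescaled to match the
            # expectation of the per-entry term).
            vA = e * (B[j] * C[k]) - es * (Bs[j] * Cs[k]) + gAs[i] / (J * K)
            vB = e * (A[i] * C[k]) - es * (As[i] * Cs[k]) + gBs[j] / (I * K)
            vC = e * (A[i] * B[j]) - es * (As[i] * Bs[j]) + gCs[k] / (I * J)
            # Differential privacy: clip and perturb each row update.
            A[i] -= lr * privatize(vA, clip, sigma, rng)
            B[j] -= lr * privatize(vB, clip, sigma, rng)
            C[k] -= lr * privatize(vC, clip, sigma, rng)
    return A, B, C

# Toy usage: approximately recover a synthetic rank-3 tensor.
rng = np.random.default_rng(1)
X = np.einsum('ir,jr,kr->ijk',
              rng.standard_normal((10, 3)),
              rng.standard_normal((12, 3)),
              rng.standard_normal((8, 3)))
A, B, C = dp_svrg_cp(X, R=3)
print(np.linalg.norm(X - cp_reconstruct(A, B, C)) / np.linalg.norm(X))
```

The lock-free asynchronous GPU parallelism the abstract mentions would correspond to running many such inner-loop iterations concurrently, Hogwild-style, without locking the factor rows; that is not reproduced in this serial sketch.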
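For the multi-GPU variant, the abstract states only that distributed ADMM is used to parallelize DPTF-SVRG. A generic consensus-ADMM skeleton under that reading, with hypothetical names and single-process NumPy standing in for actual per-GPU workers, might look like:

```python
# Generic consensus ADMM over W workers, a stand-in for how ADMM-DPTF
# might coordinate per-GPU factor copies; this is an assumption, not
# the paper's formulation. Each worker exposes a gradient oracle for
# its data shard; the x-step is approximated by one gradient step.
import numpy as np

def consensus_admm(grads, x0, rho=1.0, lr=0.1, steps=100):
    W = len(grads)
    xs = [x0.copy() for _ in range(W)]          # local variables
    us = [np.zeros_like(x0) for _ in range(W)]  # scaled dual variables
    z = x0.copy()                               # global consensus variable
    for _ in range(steps):
        for w in range(W):
            # One gradient step on f_w(x) + (rho/2) * ||x - z + u_w||^2.
            xs[w] -= lr * (grads[w](xs[w]) + rho * (xs[w] - z + us[w]))
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # averaging step
        for w in range(W):
            us[w] += xs[w] - z                  # dual ascent step
    return z
```

In the setting the abstract describes, each grads[w] call would be replaced by a local DPTF-SVRG pass on one GPU's shard, and the averaging step would become the inter-GPU communication round.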
Keywords
Tensor Factorization, Differential Privacy, Parallel and Distributed Computing, Machine Learning