A Personalized Privacy Preserving Mechanism for Crowdsourced Federated Learning

IEEE Transactions on Mobile Computing (2024)

Abstract
In this paper, we focus on privacy-preserving mechanism design for crowdsourced Federated Learning (FL), where a requester can outsource its model training task to workers via an FL platform. A natural way to preserve the privacy of workers' local data is to apply Differential Privacy (DP) mechanisms to local models. However, most existing studies do not allow workers to control their own privacy protection levels. We therefore propose a Personalized Privacy Preserving Mechanism, called P3M, to satisfy workers' heterogeneous privacy needs. P3M consists of two parts. First, we formulate a personalized privacy budget determination problem, model it as a two-stage Stackelberg game, derive the personalized privacy budget for each worker and the optimal payment for the requester, and prove that these form a unique Stackelberg equilibrium. Second, we design a dynamic perturbation scheme to perturb model parameters. Through theoretical analysis, we prove that P3M satisfies the desired DP property, and derive bounds on the variance of the average perturbed parameters as well as a convergence upper bound, which shows that the global model accuracy is controllable and that P3M achieves satisfactory convergence performance. In addition, we extend the problem to the scenario where the total privacy budget across all workers is limited, so as to prevent some workers from setting exorbitant privacy budgets; under this privacy constraint, we re-derive the personalized privacy budget for each worker. Finally, extensive simulations of P3M on real-world datasets corroborate its effectiveness and practicality.
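The core idea of per-worker DP perturbation described above can be illustrated with a minimal sketch. This is not the paper's actual dynamic perturbation scheme; it assumes a simple Laplace mechanism, a fixed L1 sensitivity for clipped model updates, and toy parameter vectors, purely to show how heterogeneous privacy budgets lead to differently scaled noise before server-side averaging.

```python
import numpy as np

def perturb_parameters(params, epsilon, sensitivity=1.0, rng=None):
    """Add Laplace noise scaled to a worker's personal privacy budget.

    Illustrative sketch only: `sensitivity` (the L1 sensitivity of the
    clipped local update) is an assumed constant, and the Laplace
    mechanism stands in for P3M's dynamic perturbation scheme.
    """
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon  # smaller epsilon -> stronger privacy, more noise
    return params + rng.laplace(loc=0.0, scale=scale, size=params.shape)

# Workers with heterogeneous (personalized) budgets perturb their
# local models independently before uploading them.
budgets = [0.5, 1.0, 2.0]          # assumed personalized epsilons
local_model = np.zeros(4)          # toy local parameter vector
noisy_models = [perturb_parameters(local_model, eps) for eps in budgets]

# The server only ever sees perturbed models and averages them.
avg_model = np.mean(noisy_models, axis=0)
```

A worker choosing a smaller budget (epsilon) injects more noise into its uploaded model, which is exactly the tension the Stackelberg game resolves: the requester's payment must compensate workers for relaxing their budgets enough to keep the averaged model's variance, and hence the convergence bound, under control.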
Keywords
Federated learning, mobile crowdsourcing, privacy preservation, Stackelberg game