Strategic Gradient Transmission with Targeted Privacy-Awareness in Model Training: A Stackelberg Game Analysis

Hezhe Sun, Yufei Wang, Huiwen Yang, Kaixuan Huo, Yuzhe Li

IEEE Transactions on Artificial Intelligence (2024)

Abstract
Privacy-aware machine learning paradigms have attracted widespread attention due to their ability to safeguard the local privacy of data owners, preventing the leakage of private information to untrustworthy platforms or malicious third parties. This paper focuses on characterizing the interactions between the learner and the data owner within this privacy-aware training process. Here, the data owner hesitates to transmit the original gradient to the learner due to potential cybersecurity issues, such as gradient leakage and membership inference. To address this concern, we propose a Stackelberg game framework that models the training process. In this framework, the data owner's objective is not to maximize the discrepancy between the learner's obtained gradient and the true gradient, but rather to ensure that the learner obtains a gradient closely resembling one deliberately designed by the data owner, while the learner's objective is to recover the true gradient as accurately as possible. We derive the optimal encoder and decoder under mismatched cost functions and characterize the equilibrium for specific cases, balancing model accuracy and local privacy. Numerical examples illustrate the main results, and we conclude with an expanded discussion suggesting future investigations into reliable countermeasure design.
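To make the encoder/decoder interaction concrete, the following is a minimal illustrative sketch, not the paper's actual model: a scalar linear-Gaussian Stackelberg game in which the data owner (leader) chooses how much noise to inject into the transmitted gradient, anticipating that the learner (follower) will respond with its best-response MMSE decoder. The variance `sg2`, the privacy weight `lam`, and the mutual-information leakage measure are all assumptions made for this toy example.

```python
import numpy as np

# Toy scalar linear-Gaussian Stackelberg game (illustrative assumptions):
#   True gradient:   g ~ N(0, sg2)
#   Owner transmits: y = g + n,  n ~ N(0, sn2),  sn2 chosen by the owner
#   Learner decodes with its best response, the MMSE estimator:
#       g_hat = k * y,  with shrinkage gain k = sg2 / (sg2 + sn2)
# The leader anticipates this best response and picks sn2 to trade off
# decoding accuracy against privacy leakage, measured here (an assumed
# proxy) by the mutual information I(g; y) = 0.5 * ln(1 + sg2 / sn2).

sg2 = 1.0   # variance of the true gradient (assumed)
lam = 0.6   # owner's privacy weight (assumed)

def follower_gain(sn2):
    """Learner's best-response MMSE shrinkage coefficient."""
    return sg2 / (sg2 + sn2)

def leader_cost(sn2):
    """Owner's cost: decoding error plus weighted information leakage."""
    k = follower_gain(sn2)
    mse = (1 - k) * sg2                  # E[(g_hat - g)^2] under MMSE
    leak = 0.5 * np.log(1 + sg2 / sn2)   # I(g; y) in nats
    return mse + lam * leak

# Leader's move: optimize the noise level, anticipating the follower.
grid = np.linspace(0.05, 5.0, 2000)
costs = np.array([leader_cost(s) for s in grid])
sn2_star = grid[np.argmin(costs)]
print(f"equilibrium noise variance: {sn2_star:.3f}")
print(f"learner's shrinkage gain:   {follower_gain(sn2_star):.3f}")
```

With these parameters the first-order condition gives an interior equilibrium at sn2 = lam / (2 - lam) ≈ 0.429: the owner neither reveals the gradient exactly nor drowns it entirely, mirroring the accuracy/privacy balance discussed in the abstract.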
Keywords
Machine learning, Privacy-aware training, Strategic transmission, Game theory, Stackelberg game, Stochastic gradient descent