
Fool Attackers by Imperceptible Noise: A Privacy-Preserving Adversarial Representation Mechanism for Collaborative Learning

Na Ruan, Jikun Chen, Tu Huang, Zekun Sun, Jie Li

IEEE Transactions on Mobile Computing (2024)

Abstract
The performance of deep learning models depends heavily on the amount of training data. It is common practice for today's data holders to merge their datasets and train models collaboratively, yet this poses a threat to data privacy. Unlike existing methods such as secure multi-party computation (MPC) and federated learning (FL), we find that representation learning has unique advantages in collaborative learning due to its low privacy budget, wide task applicability, and low communication overhead. However, data representations face the threat of model inversion attacks. In this article, we formally define the collaborative learning scenario and present ARS (Adversarial Representation Sharing), a collaborative learning framework in which users share representations of their data to train models, adding imperceptible adversarial noise to those representations to defend against reconstruction and attribute extraction attacks. Through theoretical analysis and evaluation of ARS in different contexts, we demonstrate that our mechanism is effective against model inversion attacks and achieves high utility and low communication complexity while preserving data privacy. Moreover, the ARS framework is widely applicable: it extends easily to the vertical data partitioning scenario and can be used in different tasks.
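The core idea in the abstract, that imperceptible adversarial noise added to a shared representation can degrade an inversion attacker's reconstruction, can be illustrated with a minimal sketch. The sketch below is not the paper's method: it assumes a toy linear encoder, a linear stand-in for the inversion attacker, and an FGSM-style perturbation of the representation; all names (`W_enc`, `W_dec`, `perturb`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoder mapping raw data x to a shared representation z.
d_x, d_z = 8, 4
W_enc = rng.normal(size=(d_z, d_x))

# Stand-in for a model inversion attacker: a linear decoder z -> x_hat.
# A real attacker would train this model; here the pseudo-inverse plus
# noise models an imperfect, learned decoder.
W_dec = np.linalg.pinv(W_enc) + 0.1 * rng.normal(size=(d_x, d_z))

def perturb(z, x, eps=0.5):
    """FGSM-style step on the representation: move z in the direction that
    increases the attacker's reconstruction error ||W_dec @ z - x||^2."""
    residual = W_dec @ z - x
    grad_z = 2.0 * W_dec.T @ residual  # gradient of squared error w.r.t. z
    return z + eps * np.sign(grad_z)

x = rng.normal(size=d_x)
z = W_enc @ x          # clean representation the user would share
z_adv = perturb(z, x)  # representation with adversarial noise added

err_clean = np.linalg.norm(W_dec @ z - x)
err_adv = np.linalg.norm(W_dec @ z_adv - x)
print(err_clean, err_adv)  # reconstruction error grows after perturbation
```

Because the attacker's loss is convex in `z`, a step along the sign of its gradient strictly increases the reconstruction error, which is the defensive effect the framework relies on; the paper's actual mechanism and noise budget are more involved than this toy.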
Keywords
Privacy, collaborative learning, adversarial examples, quantification