Vulnerabilities of Data Protection in Vertical Federated Learning Training and Countermeasures

IEEE Transactions on Information Forensics and Security (2024)

Abstract
Vertical federated learning (VFL) is an increasingly popular, yet understudied, collaborative learning technique. In VFL, features and labels are distributed among different participants, enabling innovative applications in business domains, e.g., online marketing. When deploying VFL, the training data (labels and features) of each participant ought to be protected; however, very few studies have investigated the vulnerability of data protection during the VFL training stage. In this paper, we propose a posterior-difference-based data attack, VFLRecon, which reconstructs labels and features to examine this problem. Our experiments show that standard VFL is highly vulnerable to serious privacy threats: reconstruction achieves up to 92% label accuracy and 0.05 feature MSE, compared to our baseline with 55% label accuracy and 0.19 feature MSE. Even worse, this privacy risk persists under standard operations (e.g., encrypted aggregation) that appear to be safe. We also systematically analyze data leakage risks in the VFL training stage across diverse data modalities (i.e., tabular data and images), different training frameworks (i.e., with or without encryption techniques), and a wide range of training hyperparameters. To mitigate this risk, we design a novel defense mechanism, VFLDefender, dedicated to obfuscating the correlation between bottom model changes and labels (features) during training. The experimental results demonstrate that VFLDefender prevents reconstruction attacks under standard encryption operations (around 17% more effective than standard encryption operations alone).
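The abstract notes that training-stage signals in VFL can leak private labels. The following is a minimal toy sketch (not the paper's VFLRecon attack, which uses posterior differences) of a well-known version of this leakage: with a binary cross-entropy loss, the gradient sent back to a feature-holding party during backpropagation has a sign that directly reveals the label party's private labels. All variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy VFL setup: the label party holds y; a feature party contributes a
# scalar logit z and receives the gradient dL/dz during backpropagation.
n = 200
z = rng.normal(size=n)            # logits derived from the feature party's bottom model
y = rng.integers(0, 2, size=n)    # private binary labels held by the label party

# Gradient of binary cross-entropy w.r.t. the logit: sigmoid(z) - y
p = 1.0 / (1.0 + np.exp(-z))
grad = p - y

# Since sigmoid(z) lies in (0, 1), the gradient is negative exactly when
# y = 1, so the feature party can read labels off the gradient sign.
inferred = (grad < 0).astype(int)
accuracy = (inferred == y).mean()
print(accuracy)  # 1.0 on this toy example
```

This is why defenses such as the paper's VFLDefender aim to obfuscate the correlation between bottom-model updates and the private labels, rather than relying on encryption of the aggregation step alone.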
Keywords
Privacy-preserving machine learning, vertical federated learning, privacy leakage, data safety, privacy