A GAN-based data poisoning framework against anomaly detection in vertical federated learning
CoRR (2024)
Abstract
In vertical federated learning (VFL), commercial entities collaboratively
train a model while preserving data privacy. However, a malicious participant's
poisoning attack may degrade the performance of this collaborative model. The
main challenge in achieving the poisoning attack is the absence of access to
the server-side top model, leaving the malicious participant without a clear
target model. To address this challenge, we introduce an innovative end-to-end
poisoning framework P-GAN. Specifically, the malicious participant initially
employs semi-supervised learning to train a surrogate target model.
Subsequently, this participant uses a GAN-based method to generate
adversarial perturbations that degrade the surrogate target model's performance.
Finally, the trained generator is applied to craft poisoned data in the VFL
setting. In addition, we develop an anomaly detection algorithm based on a deep
auto-encoder (DAE), providing a robust defense mechanism for VFL scenarios.
Through extensive
experiments, we evaluate the efficacy of P-GAN and DAE, and further analyze the
factors that influence their performance.
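The DAE defense mentioned above flags samples whose reconstruction error is unusually large, on the premise that an auto-encoder trained only on benign data reconstructs benign inputs well but poisoned inputs poorly. A minimal sketch of that idea, using hypothetical toy data and a small linear autoencoder as a stand-in for the paper's deep auto-encoder (data dimensions, threshold percentile, and training setup are all illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 8-dim "benign" embeddings from one VFL
# participant, plus a handful of poisoned points far from the benign cluster.
benign = rng.normal(0.0, 0.5, size=(200, 8))
poisoned = rng.normal(4.0, 0.5, size=(10, 8))

# Minimal linear autoencoder (stand-in for the paper's deep auto-encoder):
# encode 8 -> 3, decode 3 -> 8, trained by gradient descent on benign data only.
d, k, lr = 8, 3, 0.1
W_enc = rng.normal(0.0, 0.3, size=(d, k))
W_dec = rng.normal(0.0, 0.3, size=(k, d))

for _ in range(2000):
    z = benign @ W_enc      # latent codes
    x_hat = z @ W_dec       # reconstructions
    err = x_hat - benign    # residuals
    g_dec = z.T @ err / len(benign)                   # dLoss/dW_dec
    g_enc = benign.T @ (err @ W_dec.T) / len(benign)  # dLoss/dW_enc
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

def recon_error(x):
    """Per-sample mean squared reconstruction error."""
    x_hat = (x @ W_enc) @ W_dec
    return np.mean((x_hat - x) ** 2, axis=1)

# Flag anything whose error exceeds the 99th percentile of benign errors.
threshold = np.percentile(recon_error(benign), 99)
flags = recon_error(poisoned) > threshold
print(f"flagged {flags.sum()} / {len(poisoned)} poisoned samples")
```

Because the autoencoder only ever sees benign data, poisoned points lying off the learned low-dimensional subspace incur large reconstruction errors and exceed the threshold; the percentile-based cutoff controls the false-positive rate on benign data.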