ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning

IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022)

Citations: 22 | Views: 96
Abstract
This work tackles the Model Inversion (MI) attack on Split Federated Learning (SFL). SFL is a recent distributed training scheme in which multiple clients send intermediate activations (i.e., feature maps), instead of raw data, to a central server. While this scheme reduces the computational load at the client end, it exposes the raw data to reconstruction from the intermediate activations by the server. Existing works on protecting SFL only consider inference and do not handle attacks during training. We therefore propose ResSFL, a Split Federated Learning framework designed to be MI-resistant during training. It derives a resistant feature extractor via attacker-aware training, then uses this extractor to initialize the client-side model prior to standard SFL training. This approach reduces both the computational cost of using a strong inversion model in client-side adversarial training and the vulnerability to attacks launched in early training epochs. On the CIFAR-100 dataset, our proposed framework successfully mitigates the MI attack on a VGG-11 model, achieving a high reconstruction Mean-Square-Error of 0.050 compared to 0.005 for the baseline system. The framework achieves 67.5% accuracy (only a 1% accuracy drop) with very low computation overhead. Code is released at: https://github.com/zlijingtao/ResSFL.
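The attacker-aware training described above can be sketched as a min-max game: a simulated inversion network learns to reconstruct raw inputs from the client's intermediate activations, while the feature extractor learns to keep classification accuracy high and reconstruction error high. The minimal PyTorch sketch below illustrates this idea on toy data; the architectures, the resistance weight `lam`, and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Client-side feature extractor (the part kept on the client in SFL).
extractor = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
# Server-side classifier head operating on the intermediate activations.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8 * 8, 10))
# Simulated attacker: tries to reconstruct raw inputs from activations.
inverter = nn.Sequential(nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 3, 3, padding=1), nn.Sigmoid())

opt_task = torch.optim.Adam(list(extractor.parameters()) +
                            list(classifier.parameters()), lr=1e-3)
opt_inv = torch.optim.Adam(inverter.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
lam = 0.5  # weight of the resistance term (assumed value)

x = torch.rand(16, 3, 8, 8)       # toy "images" in [0, 1]
y = torch.randint(0, 10, (16,))   # toy labels

for step in range(50):
    # 1) Attacker step: train the inverter to reconstruct x from
    #    detached features (the extractor is frozen here).
    feats = extractor(x).detach()
    opt_inv.zero_grad()
    mse(inverter(feats), x).backward()
    opt_inv.step()

    # 2) Defender step: keep classification loss low while pushing the
    #    inverter's reconstruction error up (note the minus sign).
    opt_task.zero_grad()
    feats = extractor(x)
    loss = ce(classifier(feats), y) - lam * mse(inverter(feats), x)
    loss.backward()
    opt_task.step()

recon_err = mse(inverter(extractor(x)), x).item()
print(f"reconstruction MSE after attacker-aware training: {recon_err:.4f}")
```

In the full framework, an extractor pre-trained this way would initialize the client-side model before standard SFL training; the sketch only captures the adversarial objective, not the transfer step.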
Keywords
Privacy and federated learning, Efficient learning and inferences, Transfer/low-shot/long-tail learning, Transparency, fairness, accountability, privacy and ethics in vision