
SIA: A sustainable inference attack framework in split learning

Neural Networks (2024)

Abstract
Split learning is a widely recognized distributed learning framework suitable for joint training scenarios with limited computing resources. However, recent research indicates that a malicious server can achieve high-quality reconstruction of the client's data through feature-space hijacking attacks, leading to severe privacy-leakage concerns. In this paper, we further enhance this attack to enable efficient data reconstruction while maintaining acceptable performance on the main task. Another significant advantage of our attack framework lies in its ability to fool the state-of-the-art attack detection mechanism, thus minimizing the risk of attacker exposure and making sustainable attacks possible. Moreover, we adaptively refine and adjust the attack strategy, extending the data reconstruction attack for the first time to the more challenging scenario of vertically partitioned data in split learning. In addition, we introduce three training modes for the attack framework, allowing the attacker to choose freely according to their requirements. Finally, we conduct extensive experiments on three datasets and evaluate the attack framework's performance across different scenarios, parameter settings, and defense mechanisms. The results demonstrate our attack framework's effectiveness, invisibility, and generality. Our research comprehensively highlights the potential privacy risks associated with split learning and sounds the alarm for secure applications of split learning.
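For readers unfamiliar with the setting, the following is a minimal sketch of the split learning forward pass that such attacks target, assuming a PyTorch-style setup. All model shapes and variable names are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Illustrative split learning setup as described in the abstract:
# the client holds the first layers of the network ("bottom" model)
# and sends intermediate features ("smashed data") to the server,
# which holds the remaining layers. All names here are hypothetical.

client_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
)
server_model = nn.Linear(16 * 32 * 32, 10)

x = torch.randn(8, 3, 32, 32)   # a batch of private client images
smashed = client_model(x)       # intermediate features sent to the server

# An honest server optimizes the main-task loss on `logits`. A malicious
# server mounting a feature-space hijacking attack instead steers these
# features toward a space where a decoder it trains can reconstruct x.
logits = server_model(smashed)
print(logits.shape)             # torch.Size([8, 10])
```

Because only `smashed` crosses the client-server boundary, the privacy of `x` rests entirely on how much the server can infer from those intermediate features, which is the vulnerability the paper exploits.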
Keywords
Split learning, Inference attack, Feature space, Shadow model