Privacy-Preserving Federated Learning via Disentanglement

Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023 (2023)

Abstract
The trade-off between privacy and accuracy presents a challenge for current federated learning (FL) frameworks, hindering their progress from theory to application. The main issues with existing FL frameworks stem from a lack of interpretability and of targeted privacy protection. To address these issues, we propose Disentangled Federated Learning for Privacy (DFLP), which employs disentanglement, an interpretability technique, in privacy-preserving FL frameworks. Since sensitive properties are client-specific in nature, our main idea is to turn this characteristic into a tool that strikes a balance between data privacy and FL model performance, keeping the sensitive attributes private. DFLP disentangles the client-specific and class-invariant attributes to mask the sensitive attributes precisely. To our knowledge, this is the first work that successfully integrates disentanglement with the nature of sensitive attributes to achieve privacy protection while ensuring high FL model performance. Extensive experiments validate that disentanglement is an effective method for accuracy-aware privacy protection in FL frameworks.
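The abstract does not give implementation details, but the core idea can be illustrated with a minimal PyTorch-style sketch: each client splits its representation into a class-invariant part that participates in federated aggregation and a client-specific part that never leaves the device, so that sensitive, client-specific attributes stay private. All module names, the decoder-based disentanglement objective, and the parameter-sharing rule below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the disentanglement idea described in the abstract.
# Names and architecture are illustrative only.
import torch
import torch.nn as nn


class DisentangledClientModel(nn.Module):
    def __init__(self, in_dim: int, feat_dim: int, num_classes: int):
        super().__init__()
        # Encoder for class-invariant attributes (aggregated across clients).
        self.shared_encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Encoder for client-specific (potentially sensitive) attributes (kept local).
        self.private_encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Classifier operates only on the class-invariant representation.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Local decoder reconstructs the input from both parts, encouraging the
        # two encoders to capture complementary (disentangled) factors.
        self.decoder = nn.Linear(2 * feat_dim, in_dim)

    def forward(self, x: torch.Tensor):
        z_shared = self.shared_encoder(x)
        z_private = self.private_encoder(x)
        logits = self.classifier(z_shared)
        recon = self.decoder(torch.cat([z_shared, z_private], dim=-1))
        return logits, recon


def shared_state(model: DisentangledClientModel) -> dict:
    """Only the class-invariant modules are uploaded for federated averaging;
    the client-specific encoder, and hence the sensitive attributes it captures,
    stays on the client."""
    return {k: v for k, v in model.state_dict().items()
            if k.startswith(("shared_encoder", "classifier"))}
```

Under these assumptions, a federated round would average only the tensors returned by `shared_state` across clients, leaving each `private_encoder` untouched, which is one plausible way to "mask the sensitive attributes" while still training a high-performing global classifier.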
Keywords
Federated learning, disentanglement, interpretability, privacy