Federated Unlearning With Momentum Degradation.

IEEE Internet Things J. (2024)

Abstract
Data privacy is becoming increasingly important as data becomes more valuable, as evidenced by the enactment of right-to-be-forgotten laws and regulations. However, in a federated learning system, simply deleting data from the database when a user requests data revocation is not sufficient, because the training data is already implicitly contained in the parameter distribution of the models trained with it. Furthermore, the global model in a federated learning system is vulnerable to data poisoning attacks by malicious nodes; a reliable method for reversing data poisoning can effectively counter such attacks. In this paper, we analyze the necessity of decoupling the unlearning and training processes and propose a training-agnostic, efficient method that can effectively perform two types of unlearning tasks: client revocation and category removal. Specifically, we decompose the unlearning process into two steps: knowledge erasure and memory guidance. We first propose a novel knowledge erasure strategy called momentum degradation (MoDe), which erases implicit knowledge in the model and ensures that the model moves smoothly toward the early state of a retrained model. To mitigate the performance degradation caused by the first step, the memory guidance strategy applies guided fine-tuning of the model on different data points, effectively restoring the model's discriminability on the remaining data. Extensive experiments demonstrate that our method outperforms existing task-specific algorithms, matches the performance of retraining, and is 5-20 times faster than retraining on different datasets.
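For illustration, the following is a minimal PyTorch sketch of the two-step idea described in the abstract. The interpolation toward a degraded (freshly re-initialized) reference model and the soft-target guide model used for fine-tuning are assumptions made for the example; they are not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def mode_step(model, degraded_model, momentum=0.9):
    """One momentum-degradation (MoDe) step (illustrative).

    Pushes the current model's parameters toward a degraded reference
    model (e.g., a freshly re-initialized network), so implicit knowledge
    is erased gradually instead of being reset outright. The EMA-style
    interpolation below is an assumed update rule for this sketch.
    """
    with torch.no_grad():
        for p, p_deg in zip(model.parameters(), degraded_model.parameters()):
            p.mul_(momentum).add_(p_deg, alpha=1.0 - momentum)


def memory_guidance(model, guide_model, remaining_loader, optimizer, device="cpu"):
    """Guided fine-tuning on the remaining data (illustrative).

    The hypothetical guide_model supplies soft targets so the unlearned
    model recovers discriminability on the data it is allowed to keep.
    """
    model.train()
    guide_model.eval()
    for x, _ in remaining_loader:
        x = x.to(device)
        with torch.no_grad():
            target = torch.softmax(guide_model(x), dim=1)  # soft labels
        loss = F.cross_entropy(model(x), target)  # probability targets
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In this sketch, knowledge erasure and memory guidance are decoupled from the original training loop: `mode_step` can be applied repeatedly to drift the model toward the degraded reference before `memory_guidance` restores accuracy on the retained data.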
Keywords
Federated unlearning, data privacy, momentum degradation, memory guidance