Goldfish: An Efficient Federated Unlearning Framework
arXiv (2024)
Abstract
With recent legislation on the right to be forgotten, machine unlearning has
emerged as a crucial research area. It facilitates the removal of a user's data
from federated trained machine learning models without the necessity of
retraining from scratch. However, current machine unlearning algorithms face
challenges of efficiency and validity. To address these issues, we propose a
new framework, named Goldfish, which comprises four modules: basic model, loss
function, optimization, and extension. To address the low validity of existing
machine unlearning algorithms, we propose a novel loss function that accounts
for the loss arising from the discrepancy between predictions and actual labels
on the remaining dataset, the bias of predicted results on the removed dataset,
and the confidence level of the predicted results. Additionally, to enhance
efficiency, we adopt a knowledge distillation technique in the basic model and
introduce an optimization module that encompasses an early termination
mechanism guided by empirical risk and a data partition mechanism. Furthermore,
to bolster the robustness of the aggregated model, we propose an extension
module that incorporates a mechanism using adaptive distillation temperature to
address the heterogeneity of users' local data and a mechanism using adaptive
weights to handle variation in the quality of uploaded models. Finally, we
conduct comprehensive experiments to demonstrate the effectiveness of the
proposed approach.
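As an illustration of the kind of composite objective the abstract describes, the following is a minimal, self-contained sketch of an unlearning loss that combines (1) cross-entropy on the remaining data, (2) a penalty pushing predictions on the removed data toward a uniform (low-confidence) distribution, and (3) a temperature-scaled distillation term toward a teacher model. The specific terms, the coefficients `alpha` and `beta`, and all function names here are assumptions for exposition, not the paper's actual formulation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, label):
    """Negative log-likelihood of the true label."""
    return -math.log(max(probs[label], 1e-12))

def unlearning_loss(remain_logits, remain_labels,
                    forget_logits, teacher_logits,
                    temperature=2.0, alpha=0.5, beta=0.5):
    """Toy composite unlearning loss (illustrative only; the exact terms
    and weights are assumptions, not Goldfish's formulation)."""
    # (1) standard cross-entropy on the remaining dataset
    remain_term = sum(
        cross_entropy(softmax(z), y)
        for z, y in zip(remain_logits, remain_labels)
    ) / len(remain_labels)

    # (2) confidence penalty on the removed dataset: KL(p || uniform),
    # which is zero only when the model is maximally uncertain
    forget_term = 0.0
    for z in forget_logits:
        p = softmax(z)
        k = len(p)
        forget_term += sum(pi * math.log(max(pi * k, 1e-12)) for pi in p)
    forget_term /= len(forget_logits)

    # (3) distillation toward the teacher at an elevated temperature:
    # KL(teacher || student) on the remaining data
    distill_term = 0.0
    for zs, zt in zip(remain_logits, teacher_logits):
        ps = softmax(zs, temperature)
        pt = softmax(zt, temperature)
        distill_term += sum(
            t * math.log(max(t / max(s, 1e-12), 1e-12))
            for t, s in zip(pt, ps)
        )
    distill_term /= len(teacher_logits)

    return remain_term + alpha * forget_term + beta * distill_term
```

In this sketch, raising `temperature` softens both the student's and teacher's distributions before the distillation term is computed, which is one plausible way a per-client adaptive temperature could be plugged in.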