Zero-Shot Machine Unlearning at Scale via Lipschitz Regularization
CoRR (2024)
Abstract
To comply with AI and data regulations, the need to forget private or
copyrighted information from trained machine learning models is increasingly
important. The key challenge in unlearning is forgetting the necessary data in
a timely manner, while preserving model performance. In this work, we address
the zero-shot unlearning scenario, whereby an unlearning algorithm must be able
to remove data given only a trained model and the data to be forgotten. Under
such a definition, existing state-of-the-art methods are insufficient. Building
on the concepts of Lipschitz continuity, we present a method that induces
smoothing of the forget sample's output, with respect to perturbations of that
sample. We show this smoothing successfully results in forgetting while
preserving general model performance. We perform extensive empirical evaluation
of our method over a range of contemporary benchmarks, verifying that our
method achieves state-of-the-art performance under the strict constraints of
zero-shot unlearning.
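The abstract describes smoothing the model's output on a forget sample with respect to perturbations of that sample. A minimal sketch of that idea (not the paper's exact algorithm) is shown below for a hypothetical linear model: we penalize how much the output changes under small random input perturbations of the forget samples, and descend on that penalty so the output becomes locally flat (small local Lipschitz constant) around them. All names (`W`, `forget_x`, `sensitivity`) are illustrative assumptions.

```python
# Hedged sketch of Lipschitz-style output smoothing for unlearning:
# penalize output change on forget samples under input perturbations.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))            # hypothetical trained linear head
forget_x = rng.normal(size=(5, 8))     # hypothetical forget samples
# fixed random perturbation directions (perturbation scale folded into lr)
deltas = [rng.normal(size=forget_x.shape) for _ in range(4)]

def sensitivity(W, x, deltas):
    """Mean squared output change under the perturbations: a local
    Lipschitz-style smoothness measure of the model around x."""
    base = x @ W.T
    return float(np.mean([np.mean(((x + d) @ W.T - base) ** 2) for d in deltas]))

lr = 0.5
before = sensitivity(W, forget_x, deltas)
for _ in range(100):
    # closed-form gradient of the penalty for a linear model:
    # d/dW mean ||W d^T||_F^2 = 2 W (d^T d) / (n * m)
    n, m = forget_x.shape[0], W.shape[0]
    grad = sum(2 * W @ (d.T @ d) for d in deltas) / (len(deltas) * n * m)
    W -= lr * grad
after = sensitivity(W, forget_x, deltas)
```

After the loop, `after` is smaller than `before`: the model's response around the forget samples has been flattened, which is the smoothing-induces-forgetting intuition of the abstract. The actual method additionally has to preserve general model performance, which this toy penalty alone does not address.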