Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks
CoRR (2024)
Abstract
Graph unlearning has emerged as an essential tool for safeguarding user
privacy and mitigating the negative impacts of undesirable data. Meanwhile, the
advent of dynamic graph neural networks (DGNNs) marks a significant advancement
due to their superior capability in learning from dynamic graphs, which
encapsulate spatial-temporal variations in diverse real-world applications
(e.g., traffic forecasting). With the increasing prevalence of DGNNs, it
becomes imperative to investigate the implementation of dynamic graph
unlearning. However, current graph unlearning methodologies are designed for
GNNs operating on static graphs and exhibit limitations, including operating
only in a pre-processing manner and imposing impractical resource demands.
Furthermore, the
adaptation of these methods to DGNNs presents non-trivial challenges, owing to
the distinctive nature of dynamic graphs. To this end, we propose an effective,
efficient, model-agnostic, and post-processing method to implement DGNN
unlearning. Specifically, we first define the unlearning requests and formulate
dynamic graph unlearning in the context of continuous-time dynamic graphs.
After conducting a role analysis on the unlearning data, the remaining data,
and the target DGNN model, we propose a method called Gradient Transformation
and a loss function to map the unlearning request to the desired parameter
update. Evaluations on six real-world datasets and state-of-the-art DGNN
backbones demonstrate its effectiveness (e.g., limited performance drop and, in
some cases, clear improvement), its efficiency (e.g., up to 7.23× speed-up),
and its potential advantages in handling future unlearning requests (e.g., up
to 32.59× speed-up).
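The abstract describes mapping an unlearning request to a parameter update via a learned "Gradient Transformation". The following is a minimal, illustrative sketch of that general idea, not the paper's implementation: every function, variable, and the toy quadratic loss are assumptions introduced here to show how a transformation of the unlearning gradient could produce a post-processing parameter update for a frozen model.

```python
# Illustrative sketch only: a post-processing step that maps the gradient of
# the unlearning data to a parameter update. All names and the toy loss are
# hypothetical, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def model_loss_grad(params, data):
    # Stand-in for the DGNN loss gradient w.r.t. parameters on `data`:
    # gradient of the toy quadratic loss 0.5 * ||params - data||^2.
    return params - data

def gradient_transformation(grad, W):
    # A learnable linear map standing in for the "gradient transformation":
    # turns the raw unlearning gradient into the applied parameter update.
    return W @ grad

# Frozen target parameters and one unlearning request (both illustrative).
params = rng.normal(size=4)
unlearn_data = rng.normal(size=4)
W = np.eye(4) * 0.5  # toy transformation weights

grad = model_loss_grad(params, unlearn_data)
update = gradient_transformation(grad, W)
# Moving *along* the loss gradient on the unlearning data (ascent) pushes the
# model away from fitting that data, without retraining from scratch.
new_params = params + update
```

In the paper's setting, the transformation would itself be trained (with a loss balancing forgetting the unlearning data against preserving performance on the remaining data); here the fixed matrix `W` merely illustrates the mapping from gradient to update.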