FedME2: Memory Evaluation & Erase Promoting Federated Unlearning in DTMN

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS (2023)

Abstract
Digital Twins (DTs) can generate digital replicas of mobile networks (MNs) that accurately reflect the network's state. Machine learning (ML) models trained in the virtual environment of a DT for an MN (DTMN) can be deployed in the MN more robustly, avoiding the training difficulties and runtime errors caused by MN instability and multiple failures. However, when using data from the various devices in the MN system, DTs must prioritize data privacy. Federated learning (FL) protects DTMN data privacy by building models without data leaving the devices. Nevertheless, FL's privacy protection needs further improvement: it only guarantees device-level data ownership and ignores the fact that trained models may retain private information from the data. This paper therefore focuses on data forgetting for privacy protection and proposes a novel FL-based unlearning framework (FedME2), which comprises MEval and MErase modules. Guided by memory-evaluation information from MEval and using MErase's multi-loss training approach, FedME2 achieves accurate data forgetting in DTMN. In four DTMN virtual environments, FedME2 attains an average data forgetting rate of approximately 75% for global models under FL while keeping the impact on global-model accuracy below 4%. FedME2 thus delivers better data forgetting and improves DTMN data privacy protection while preserving model accuracy.
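The abstract describes MErase as a multi-loss training approach for unlearning, but gives no formulas. A common way to realize such an objective, sketched below as an assumption rather than the paper's actual method, is to combine a retention loss on the data to keep with a forgetting loss that drives the model's predictions on the forget set toward an uninformative (maximally uncertain) output. The function `erase_step` and the weight `lam` are hypothetical names for illustration, shown here on a toy one-parameter logistic model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def erase_step(w, retain, forget, lam=5.0, lr=0.1):
    """One gradient step on L = L_retain + lam * L_forget (hypothetical form).

    retain: list of (x, y) pairs the model should keep fitting (y in {0, 1}).
    forget: list of x values whose predictions should drift toward 0.5.
    """
    grad = 0.0
    # Retention loss: standard logistic cross-entropy gradient on kept data.
    for x, y in retain:
        grad += (sigmoid(w * x) - y) * x / len(retain)
    # Forgetting loss: squared distance of the prediction from 0.5, so the
    # model becomes uncertain about the forgotten points.
    for x in forget:
        p = sigmoid(w * x)
        grad += lam * 2.0 * (p - 0.5) * p * (1.0 - p) * x / len(forget)
    return w - lr * grad

w = 2.0                 # model starts out confident that x = 1 is class 1
retain = [(-1.0, 0.0)]  # keep predicting class 0 for negative inputs
forget = [1.0]          # erase what was learned about x = 1
for _ in range(200):
    w = erase_step(w, retain, forget)
# After training, sigmoid(w * 1.0) has moved toward 0.5 (forgotten),
# while sigmoid(w * -1.0) stays below 0.5 (retained).
```

The weight `lam` trades forgetting strength against retained accuracy, mirroring the paper's reported balance between a high forgetting rate and a small accuracy loss; its value here is illustrative only.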
Key words
Data models, Manganese, Training, Computational modeling, Data privacy, Servers, Task analysis, Digital twins, federated learning, machine unlearning