Exact and Efficient Unlearning for Large Language Model-based Recommendation
arXiv (2024)
Abstract
The evolving paradigm of Large Language Model-based Recommendation (LLMRec) customizes Large Language Models (LLMs) through parameter-efficient fine-tuning (PEFT) using recommendation data. The inclusion of user data in LLMs raises privacy concerns. To protect users, the unlearning process in LLMRec, specifically removing unusable data (e.g., historical behaviors) from established LLMRec models, becomes crucial. However, existing unlearning methods are insufficient for the unique characteristics of LLMRec, mainly due to high computational costs or incomplete data erasure. In this study, we introduce the Adapter Partition and Aggregation (APA) framework for exact and efficient unlearning while maintaining recommendation performance. APA achieves this by establishing distinct adapters for partitioned training data shards and retraining only the adapters impacted by unusable data for unlearning. To preserve recommendation performance and mitigate considerable inference costs, APA employs parameter-level adapter aggregation with sample-adaptive attention for individual testing samples. Extensive experiments substantiate the effectiveness and efficiency of our proposed framework.
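The mechanism described above can be sketched in miniature. This is a minimal illustration, not the paper's implementation: adapters are modeled as flat parameter vectors rather than LoRA weight matrices, and the shard "centroids" used as attention keys are an assumed representation of each shard's training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 partitioned data shards, each with its own
# trained adapter (modeled here as a flat parameter vector).
num_shards, dim = 3, 4
adapters = [rng.normal(size=dim) for _ in range(num_shards)]

# Assumed per-shard summary vectors serving as attention keys.
centroids = rng.normal(size=(num_shards, dim))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate(sample_emb):
    """Sample-adaptive attention: weight each shard's adapter by the
    test sample's similarity to that shard, then merge the parameters."""
    attn = softmax(centroids @ sample_emb)            # shape: (num_shards,)
    merged = sum(w * a for w, a in zip(attn, adapters))
    return merged, attn

def unlearn(shard_id, retrain_fn):
    """Exact unlearning: only the adapter of the shard that contained
    the unusable data is retrained; all other adapters are untouched."""
    adapters[shard_id] = retrain_fn(shard_id)

sample = rng.normal(size=dim)        # embedding of one test sample
merged, attn = aggregate(sample)     # attn sums to 1 by construction
```

Because each adapter only ever saw its own shard, retraining that single adapter after removing a user's data yields exact erasure at a fraction of full-retraining cost, while the attention-weighted merge keeps inference to a single aggregated adapter per sample.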