Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models
CoRR (2024)
Abstract
Memory Editing (ME) has emerged as an efficient method to modify erroneous
facts or inject new facts into Large Language Models (LLMs). Two mainstream ME
methods exist: parameter-modifying ME and parameter-preserving ME (integrating
extra modules while preserving original parameters). Regrettably, previous
studies on ME evaluation have two critical limitations: (i) evaluating LLMs
with single edit only, neglecting the need for continuous editing, and (ii)
evaluations focusing solely on basic factual triples, overlooking broader LLM
capabilities like logical reasoning and reading comprehension. This study
addresses these limitations with three contributions: (i) We explore how ME
affects a wide range of fundamental capabilities of LLMs under sequential
editing. Experimental results reveal an intriguing phenomenon: Most
parameter-modifying ME consistently degrade performance across all tasks after
a few sequential edits. In contrast, parameter-preserving ME effectively
maintains LLMs' fundamental capabilities but struggles to accurately recall
edited knowledge presented in a different format. (ii) We extend our evaluation
to different editing settings, such as the choice of layers to edit, model
size, and instruction tuning. Our findings indicate several strategies that can
potentially mitigate the adverse effects of ME. (iii) We further explain why
parameter-modifying ME damages LLMs along three dimensions: parameter changes
after editing, language modeling capability, and in-context learning
capability. Our in-depth study advocates more careful use of ME in real-world
scenarios.
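
To make the two paradigms concrete, below is a minimal toy sketch in PyTorch. It is an illustration, not the paper's implementation: rank_one_edit loosely mirrors parameter-modifying methods in the spirit of ROME, directly rewriting a layer's weights, while SideMemory is a hypothetical stand-in for parameter-preserving methods, which keep the base weights frozen and route edits through an added module. The layer dimensions, key/value vectors, and the 0.95 similarity threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy feed-forward layer standing in for one MLP block of an LLM
# (dimensions are hypothetical, chosen only for illustration).
layer = nn.Linear(16, 16)

# --- Parameter-modifying ME (in the spirit of ROME/MEMIT) ---
# Overwrite the weights with a rank-one update so that key vector k
# maps to value vector v. Sequential edits keep accumulating changes
# in the original parameters, which is where degradation can creep in.
def rank_one_edit(linear: nn.Linear, k: torch.Tensor, v: torch.Tensor) -> None:
    with torch.no_grad():
        residual = v - linear(k)                      # what the layer currently gets wrong
        update = torch.outer(residual, k) / k.dot(k)  # rank-one correction matrix
        linear.weight += update                       # base parameters modified in place

# --- Parameter-preserving ME (extra module, frozen base) ---
# The original weights stay frozen; edited inputs are intercepted by a
# small key-value side memory, and everything else falls through.
class SideMemory(nn.Module):
    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # original parameters preserved
        self.keys: list[torch.Tensor] = []
        self.values: list[torch.Tensor] = []

    def add_edit(self, k: torch.Tensor, v: torch.Tensor) -> None:
        self.keys.append(k)
        self.values.append(v)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Return the stored value if x closely matches an edit key;
        # otherwise defer to the untouched base layer.
        for k, v in zip(self.keys, self.values):
            if torch.cosine_similarity(x, k, dim=0) > 0.95:  # illustrative threshold
                return v
        return self.base(x)

k, v = torch.randn(16), torch.randn(16)
rank_one_edit(layer, k, v)                # base weights are rewritten
edited = SideMemory(nn.Linear(16, 16))
edited.add_edit(k, v)                     # base weights remain untouched
```

The sketch also hints at the trade-off the abstract reports: the rank-one path stores the edit in-weights but perturbs them cumulatively across sequential edits, while the side-memory path preserves the base model yet only fires when a query closely resembles a stored key, i.e., when the edited knowledge is presented in a familiar format.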