Model Editing by Pure Fine-Tuning
CoRR (2024)
Abstract
Fine-tuning is often dismissed as ineffective for model editing because of its poor
performance compared to more specialized methods. However, fine-tuning is
simple, agnostic to the architectural details of the model being edited, and
able to leverage ongoing advances in standard training methods (e.g., PEFT),
making it an appealing choice for a model editor. In this work, we show that
pure fine-tuning can be a viable approach to model editing. We propose a slight
modification of naive fine-tuning with two key ingredients. First, we optimize
the conditional likelihood rather than the full likelihood. Second, we augment
the data with random paraphrases and facts to encourage generalization and
locality. Our experiments on ZsRE and CounterFact show that this simple
modification allows fine-tuning to often match or outperform specialized
editors in the edit score.
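
To make the two ingredients concrete, below is a minimal sketch of conditional-likelihood fine-tuning for a single edit, written against a Hugging Face-style causal LM in PyTorch. The model name, the specific counterfactual edit, and the paraphrase/locality examples are illustrative assumptions, not the authors' exact data or hyperparameters; the key point is that the loss is computed only over the target tokens (prompt positions are masked with label id -100), and augmented paraphrase and unrelated-fact examples are mixed into the same objective.

```python
# Sketch of conditional-likelihood fine-tuning for model editing.
# Assumes a Hugging Face causal LM; names and examples are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def conditional_lm_loss(prompt: str, target: str) -> torch.Tensor:
    """Cross-entropy over the target tokens only (conditional likelihood).

    Prompt tokens are masked out of the loss with label id -100, so the model
    is trained on p(target | prompt) rather than the full-sequence likelihood.
    """
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    target_ids = tok(target, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # ignore prompt positions in the loss
    return model(input_ids=input_ids, labels=labels).loss

# The edit request plus augmented examples: a paraphrase of the edit
# (generalization) and an unrelated fact that should keep its answer (locality).
edit = ("The capital of France is", " Rome")          # illustrative counterfactual edit
paraphrase = ("France's capital city is", " Rome")    # random paraphrase of the edit
locality = ("The capital of Germany is", " Berlin")   # unrelated fact, kept unchanged

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
for step in range(10):  # a small number of fine-tuning steps per edit
    optimizer.zero_grad()
    loss = sum(conditional_lm_loss(p, t) for p, t in [edit, paraphrase, locality])
    loss.backward()
    optimizer.step()
```

Because the recipe is just standard fine-tuning on an augmented batch, it remains agnostic to the model architecture and composes directly with PEFT methods such as LoRA.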