Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models
CoRR (2024)
Abstract
Catastrophic forgetting emerges as a critical challenge when fine-tuning
multi-modal large language models (MLLMs), where improving performance on
unseen tasks often leads to a significant performance drop on the original
tasks. This paper presents a comprehensive analysis of catastrophic forgetting
in MLLMs and introduces a post-training adjustment method called Model Tailor.
Our method primarily preserves the pre-trained parameters while replacing a
small number (≤ 10%) of fine-tuned parameters, maintaining ∼ 99%
effectiveness on original tasks versus pre-training, and achieving ∼ 97%
on new tasks compared to standard fine-tuning. Specifically, we derive a sparse
mask to identify the "model patch", based on a fusion strategy that integrates
salience and sensitivity analysis. Subsequently, a compensation mechanism is
introduced to "decorate the patch", enhancing the model's performance on both
target and original tasks. Additionally, our method is adaptable to multi-task
scenarios. Through extensive experiments on InstructBLIP and LLaVA-1.5 in both
image captioning and visual question answering tasks, our approach demonstrates
significant task adaptability while preserving inherent pre-trained
capabilities.
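The core mechanism described above can be sketched in PyTorch: score each fine-tuned parameter by fusing salience (magnitude of the change from the pre-trained weights) with a sensitivity term, keep only the top ≤10% of parameters as the sparse "model patch", and revert everything else to the pre-trained values. This is an illustrative sketch, not the paper's implementation: the function name, the simple weighted-sum fusion, and the use of gradient-magnitude tensors as the sensitivity input are assumptions, and the paper's compensation step ("decorating the patch") is omitted.

```python
import torch

def model_tailor_patch(pretrained, finetuned, sensitivity,
                       keep_ratio=0.10, alpha=0.5):
    """Sketch of sparse-patch selection (hypothetical helper).

    pretrained, finetuned: dicts mapping parameter name -> tensor.
    sensitivity: dict mapping name -> tensor (e.g. gradient magnitudes
        on the target task; a stand-in for the paper's sensitivity analysis).
    keep_ratio: fraction of fine-tuned parameters to keep (the paper
        reports <= 10%).
    alpha: fusion weight between salience and sensitivity (assumed).
    """
    # Fused importance score per parameter: salience is the absolute
    # drift from the pre-trained weights.
    scores = {}
    for name in finetuned:
        salience = (finetuned[name] - pretrained[name]).abs()
        scores[name] = alpha * salience + (1 - alpha) * sensitivity[name].abs()

    # Global threshold so that only the top `keep_ratio` fraction of all
    # parameters is replaced by fine-tuned values.
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k).values.min()

    # Apply the sparse mask: fine-tuned values inside the "patch",
    # pre-trained values everywhere else (the compensation step that
    # further adjusts the patch is not modeled here).
    patched = {}
    for name in finetuned:
        mask = scores[name] >= threshold
        patched[name] = torch.where(mask, finetuned[name], pretrained[name])
    return patched
```

Because the threshold is chosen globally across all tensors, layers whose parameters drifted most (and matter most for the target task) naturally contribute more entries to the patch, while the rest of the model stays at its pre-trained state, which is what preserves performance on the original tasks.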