HFT: Half Fine-Tuning for Large Language Models
CoRR (2024)
Abstract
Large language models (LLMs) with one or more fine-tuning phases have become
a necessary step to unlock various capabilities, enabling LLMs to follow
natural language instructions or align with human preferences. However, this
sequential training carries the risk of catastrophic forgetting: the
parametric knowledge or abilities learned in previous stages may be
overwhelmed by incoming training data. In this paper, we find that by regularly
resetting partial parameters, LLMs can restore some of the original knowledge.
Inspired by this, we introduce Half Fine-Tuning (HFT) for LLMs, as a substitute
for full fine-tuning (FFT), to mitigate the forgetting issue: half of the
parameters are selected to learn new tasks while the other half are frozen
to retain previous knowledge. We provide a feasibility analysis from the
perspective of optimization and interpret the parameter selection operation as
a regularization term. Without changing the model architecture, HFT could be
seamlessly integrated into existing fine-tuning frameworks. Extensive
experiments and analysis on supervised fine-tuning, direct preference
optimization, and continual learning consistently demonstrate the
effectiveness, robustness, and efficiency of HFT. Compared with FFT, HFT not
only significantly alleviates the forgetting problem, but also achieves the
best performance in a series of downstream benchmarks, with an approximately
30% reduction in training time.
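To make the core idea concrete, below is a minimal PyTorch-style sketch of freezing roughly half of a model's parameters before a fine-tuning round, so that only the remaining half is updated. The tensor-level random selection and the helper name apply_half_fine_tuning_mask are illustrative assumptions, not the paper's exact selection scheme, which defines its own granularity for choosing which half to train.

    import torch

    def apply_half_fine_tuning_mask(model, seed=0):
        # Illustrative sketch: pick roughly half of the parameter tensors
        # to stay trainable and freeze the rest for this round.
        # (Assumption: random tensor-level selection; the paper's HFT
        # chooses the trainable half with its own strategy.)
        generator = torch.Generator().manual_seed(seed)
        params = list(model.named_parameters())
        perm = torch.randperm(len(params), generator=generator)
        trainable_ids = set(perm[: len(params) // 2].tolist())
        for idx, (name, param) in enumerate(params):
            param.requires_grad = idx in trainable_ids

    # Usage sketch: re-draw the mask at the start of each fine-tuning
    # phase, then build the optimizer over the trainable half only.
    # apply_half_fine_tuning_mask(model, seed=round_id)
    # optimizer = torch.optim.AdamW(
    #     (p for p in model.parameters() if p.requires_grad), lr=2e-5)

Because the frozen half keeps its previous values, the parametric knowledge stored there is untouched by the new training data, which is the intuition behind HFT's resistance to forgetting.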