Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models
arXiv (2024)
Abstract
Underlying data distributions of natural language, programming code, and
mathematical symbols vary vastly, presenting a complex challenge for large
language models (LLMs) that strive to achieve high performance across all three
domains simultaneously. Achieving a very high level of proficiency for an LLM
within a specific domain often requires extensive training with relevant
corpora, which is typically accompanied by a sacrifice in performance in other
domains. In this paper, we propose to directly fuse models that are already
highly specialized. The proposed fusing framework, UltraFuser,
consists of three distinct specialists that are already sufficiently trained on
language, coding, and mathematics. A token-level gating mechanism is introduced
to blend the specialists' outputs. A two-stage training strategy accompanied by
balanced sampling is designed to ensure stability. To effectively train the
fused model, we further construct a high-quality supervised instruction tuning
dataset, UltraChat 2, which includes text, code, and mathematical content. This
dataset comprises approximately 300,000 instructions and covers a wide range of
topics in each domain. Experiments show that our model can achieve mastery of
all three domains simultaneously.
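
To make the core idea concrete, here is a minimal PyTorch sketch of what a token-level gating mechanism over specialist outputs might look like. This is an illustration under stated assumptions, not the paper's implementation: the class and argument names are hypothetical, the specialists are assumed to be causal LMs that share one tokenizer and expose HF-style `.logits`, and the abstract does not specify whether UltraFuser mixes logits or probabilities, or where the gate's input comes from.

```python
import torch
import torch.nn as nn

class TokenLevelGatedFusion(nn.Module):
    """Hypothetical sketch: blend per-token outputs of specialist LMs.

    Assumes each specialist is a causal LM returning logits of shape
    (batch, seq_len, vocab_size) and that all specialists share a tokenizer.
    """

    def __init__(self, specialists, hidden_size):
        super().__init__()
        self.specialists = nn.ModuleList(specialists)
        # Specialists stay frozen; only the gate is trained (an assumption
        # consistent with fusing models that are already well trained).
        for p in self.specialists.parameters():
            p.requires_grad = False
        # Gating head: maps each token's hidden state to a weight over the
        # specialists (text / code / math in the paper's setting).
        self.gate = nn.Linear(hidden_size, len(self.specialists))

    def forward(self, input_ids, hidden_states):
        # hidden_states: (batch, seq, hidden); where the gate's input comes
        # from (a base model, one of the specialists, etc.) is an assumption.
        gate_weights = torch.softmax(self.gate(hidden_states), dim=-1)
        # Stack every specialist's per-token logits:
        # (batch, seq, vocab, num_specialists).
        logits = torch.stack(
            [m(input_ids).logits for m in self.specialists], dim=-1
        )
        # Token-level mixture: each position gets its own blend of the
        # specialists' outputs (mixing raw logits is an assumption).
        fused = (logits * gate_weights.unsqueeze(2)).sum(dim=-1)
        return fused  # (batch, seq, vocab_size)
```

The key design point is that the gate operates per token rather than per sequence, so a single response can route natural-language spans, code spans, and mathematical spans to different specialists.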