SMART: Submodular Data Mixture Strategy for Instruction Tuning
arXiv (2024)
Abstract
Instruction tuning involves fine-tuning a language model on a collection of
instruction-formatted datasets in order to enhance the generalizability of the
model to unseen tasks. Studies have shown the importance of balancing different
task proportions during fine-tuning, but finding the right balance remains
challenging. Unfortunately, there is currently no systematic method beyond
manual tuning or relying on practitioners' intuition. In this paper, we
introduce SMART (Submodular data Mixture strAtegy for instRuction Tuning), a
novel data mixture strategy that uses a submodular function to assign
importance scores to tasks, which are then used to determine the mixture
weights. Given a fine-tuning budget, SMART redistributes the budget among tasks
and selects non-redundant samples from each task. Experimental results
demonstrate that SMART significantly outperforms traditional methods such as
examples-proportional mixing and equal mixing. Furthermore, SMART facilitates
the creation of data mixtures based on only a few representative subsets of
tasks, and through a task-pruning analysis we reveal that, in a limited-budget
setting, allocating the budget among a subset of representative tasks yields
superior performance compared to distributing it among all tasks. The code for
reproducing our results is open-sourced at
https://github.com/kowndinya-renduchintala/SMART.
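
To make the mixture strategy concrete, below is a minimal sketch of how a
submodular function can turn task similarities into mixture weights. It is an
illustration under stated assumptions, not the paper's exact implementation:
the facility-location objective, the cosine-similarity task embeddings, and
the proportional score-to-budget normalization are all choices made here for
the example.

```python
import numpy as np

# Illustrative sketch only: the facility-location objective and the
# proportional score-to-budget normalization are assumptions made for
# this example, not the exact procedure from the SMART paper.

def facility_location_gain(sim, selected, candidate):
    """Marginal gain of adding `candidate` to `selected` under the
    facility-location function f(S) = sum_i max_{j in S} sim[i, j]."""
    if not selected:
        return sim[:, candidate].sum()
    current = sim[:, selected].max(axis=1)
    return np.maximum(current, sim[:, candidate]).sum() - current.sum()

def greedy_task_scores(sim, k):
    """Greedily select k tasks; a task's importance score is its marginal
    gain at the step it was picked (standard greedy submodular maximization)."""
    selected, scores = [], {}
    for _ in range(k):
        gains = {t: facility_location_gain(sim, selected, t)
                 for t in range(sim.shape[0]) if t not in selected}
        best = max(gains, key=gains.get)
        scores[best] = gains[best]
        selected.append(best)
    return scores

def mixture_budgets(scores, total_budget):
    """Redistribute a fine-tuning budget across tasks in proportion to
    their importance scores."""
    z = sum(scores.values())
    return {t: int(round(total_budget * s / z)) for t, s in scores.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(10, 32))                 # one embedding per task
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)
    sim = np.clip(emb @ emb.T, 0.0, None)           # nonnegative cosine sim
    scores = greedy_task_scores(sim, k=5)
    print(mixture_budgets(scores, total_budget=10_000))
```

Under these assumptions, the same greedy routine could then be reused within
each task, over instance embeddings, to pick non-redundant samples up to that
task's allocated budget, mirroring the two-stage budget redistribution and
sample selection described in the abstract.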