LLM See, LLM Do: Guiding Data Generation to Target Non-Differentiable Objectives
arXiv (2024)
Abstract
The widespread adoption of synthetic data raises new questions about how
models generating the data can influence other large language models (LLMs) via
distilled data. To start, our work exhaustively characterizes the impact of
passive inheritance of model properties by systematically studying the
consequences of synthetic data integration. We provide one of the most
comprehensive studies to-date of how the source of synthetic data shapes
models' internal biases, calibration and generations' textual attributes and
preferences. We find that models are surprisingly sensitive to certain
attributes even when the synthetic data prompts appear "neutral", which invites
the question of whether this sensitivity can be exploited for good.
Our findings raise the question: can we explicitly steer models towards the
properties we want at test time by exploiting the data generation process?
This would have historically been considered infeasible due to the cost of
collecting data with a specific characteristic or objective in mind. However,
improvement in the quality of synthetic data, as well as a shift towards
general-purpose models designed to follow a diverse range of instructions, means
this question is timely. We propose active inheritance as a term to describe
intentionally constraining synthetic data according to a non-differentiable
objective. We demonstrate how active inheritance can steer the generation
profiles of models towards desirable non-differentiable attributes, e.g. high
lexical diversity or low toxicity.
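The abstract's notion of steering generation toward a non-differentiable attribute can be illustrated with a minimal sketch. The snippet below is a hypothetical best-of-n filter, not the paper's exact procedure: candidate generations are scored with a non-differentiable metric (type-token ratio as a proxy for lexical diversity), and only the highest-scoring candidate is kept for the synthetic dataset. All function names and the filtering strategy are illustrative assumptions.

```python
# Hypothetical sketch of steering synthetic data toward a non-differentiable
# objective (lexical diversity) by filtering candidate generations.
# The metric and selection strategy are illustrative, not the paper's method.

def type_token_ratio(text: str) -> float:
    """Lexical diversity: unique tokens / total tokens (non-differentiable)."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def select_best(candidates: list[str]) -> str:
    """Keep the candidate generation that maximizes the target attribute."""
    return max(candidates, key=type_token_ratio)

# Two hypothetical model generations for the same prompt: the filter
# retains the more lexically diverse one for the distilled dataset.
candidates = [
    "the cat sat on the mat the cat sat",
    "a quick brown fox jumps over lazy dogs",
]
best = select_best(candidates)
```

The same skeleton works for any scalar, non-differentiable objective: for low toxicity, one would replace `type_token_ratio` with a toxicity scorer and take the minimum instead of the maximum.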