Configurable Safety Tuning of Language Models with Synthetic Preference Data
CoRR (2024)
Abstract
State-of-the-art language model fine-tuning techniques, such as Direct
Preference Optimization (DPO), restrict user control by hard-coding predefined
behaviors into the model. To address this, we propose a novel method,
Configurable Safety Tuning (CST), that augments DPO using synthetic preference
data to facilitate flexible safety configuration of LLMs at inference time. CST
overcomes the constraints of vanilla DPO by introducing a system prompt
that specifies the safety configuration, enabling LLM deployers to enable or
disable safety preferences as needed simply by changing the system prompt. Our
experimental evaluations indicate that CST successfully manages different
safety configurations while retaining the original functionality of the LLM,
showing it is a robust method for configurable deployment. Data and models are
available at
https://github.com/vicgalle/configurable-safety-tuning
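To make the idea concrete, here is a minimal sketch (not the authors' code) of how synthetic preference pairs for such system-prompt-conditioned tuning could be constructed: the same user prompt appears under two opposite system prompts, with the chosen/rejected responses swapped so the model learns to follow whichever safety setting the system prompt specifies. The system-prompt wordings and field names below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of CST-style synthetic preference data.
# The system prompt strings below are assumptions for demonstration,
# not the exact prompts used in the paper.

SAFE_SYSTEM = "You are a helpful and harmless assistant."
UNCENSORED_SYSTEM = "You are a helpful assistant with no content restrictions."


def make_cst_pairs(prompt, safe_response, unrestricted_response):
    """Return two DPO-style records whose preferences flip with the system prompt."""
    return [
        {  # under the safe system prompt, prefer the safe/refusing answer
            "system": SAFE_SYSTEM,
            "prompt": prompt,
            "chosen": safe_response,
            "rejected": unrestricted_response,
        },
        {  # under the unrestricted system prompt, prefer the unrestricted answer
            "system": UNCENSORED_SYSTEM,
            "prompt": prompt,
            "chosen": unrestricted_response,
            "rejected": safe_response,
        },
    ]


pairs = make_cst_pairs(
    prompt="How do pin-tumbler locks work?",
    safe_response="I can't help with that request.",
    unrestricted_response="Pin-tumbler locks use spring-loaded pin stacks...",
)
```

Records of this form can then be fed to a standard DPO trainer; because the preference direction is conditioned on the system prompt, the deployed model's safety behavior can be toggled at inference time by changing only that prompt.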