RLHF from Heterogeneous Feedback via Personalization and Preference Aggregation
CoRR (2024)
Abstract
Reinforcement learning from human feedback (RLHF) has been an effective
technique for aligning AI systems with human values, with remarkable recent
successes in fine-tuning large language models. Most existing RLHF paradigms
make the underlying assumption that human preferences are relatively
homogeneous, and can be encoded by a single reward model. In this paper, we
focus on addressing the issues due to the inherent heterogeneity in human
preferences, as well as their potential strategic behavior in providing
feedback. Specifically, we propose two frameworks to address heterogeneous
human feedback in principled ways: a personalization-based one and an
aggregation-based one. For the former, we propose two approaches, based on
representation learning and clustering respectively, for learning multiple
reward models that trade off the bias (due to preference heterogeneity)
against the variance (due to using less data to learn each personalized
model). We then establish sample complexity guarantees for both
approaches. For the latter, we aim to adhere to the single-model framework, as
already deployed in the current RLHF paradigm, by carefully aggregating diverse
and truthful preferences from humans. We propose two approaches based on reward
and preference aggregation, respectively: the former utilizes both the
utilitarian and the Leximin approaches to aggregate individual reward models,
with sample complexity guarantees; the latter directly aggregates the human
feedback in the form of probabilistic opinions. Under the
probabilistic-opinion-feedback model, we also develop an approach to handle
strategic human labelers who may bias and manipulate the aggregated preferences
with untruthful feedback. Based on ideas from mechanism design, our approach
ensures truthful preference reporting, with the induced aggregation rule
maximizing social welfare functions.