Provable Multi-Party Reinforcement Learning with Diverse Human Feedback
CoRR (2024)
Abstract
Reinforcement learning with human feedback (RLHF) is an emerging paradigm to
align models with human preferences. Typically, RLHF aggregates preferences
from multiple individuals who have diverse viewpoints that may conflict with
each other. Our work initiates the theoretical study of multi-party
RLHF that explicitly models the diverse preferences of multiple individuals. We
show how traditional RLHF approaches can fail since learning a single reward
function cannot capture and balance the preferences of multiple individuals. To
overcome such limitations, we incorporate meta-learning to learn multiple
preferences and adopt different social welfare functions to aggregate the
preferences across multiple parties. We focus on the offline learning setting
and establish sample complexity bounds, along with efficiency and fairness
guarantees, for optimizing diverse social welfare functions such as Nash,
Utilitarian, and Leximin welfare functions. Our results show a separation
between the sample complexities of multi-party RLHF and traditional
single-party RLHF. Furthermore, we consider a reward-free setting, where each
individual's preference is no longer consistent with a reward model, and give
pessimistic variants of the von Neumann Winner based on offline preference
data. Taken together, our work showcases the advantage of multi-party RLHF but
also highlights its more demanding statistical complexity.
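As a rough illustration of why the choice of social welfare function matters, the sketch below compares Utilitarian (sum of rewards), Nash (product of rewards, computed as a sum of logs), and Leximin (lexicographic max-min) aggregation on a small set of hypothetical per-party reward estimates. These are the standard textbook definitions of the three welfare functions, not the paper's estimators; the policy set and reward numbers are invented for illustration, and the paper's offline algorithms additionally incorporate pessimism and meta-learning, which this toy example omits.

```python
# Toy illustration (not the paper's algorithm): how different social welfare
# functions can select different policies from the same per-party rewards.
# All numbers below are hypothetical.
import numpy as np

# Rows: candidate policies A, B, C. Columns: estimated reward of each of 3 parties.
rewards = np.array([
    [0.8, 0.8, 0.20],  # A: good for parties 1 and 2, weak for party 3
    [0.5, 0.5, 0.45],  # B: balanced, no party left far behind
    [1.0, 0.9, 0.10],  # C: highest total, but party 3 is nearly ignored
])
names = ["A", "B", "C"]

def utilitarian(r):
    # Utilitarian welfare: sum of individual rewards.
    return r.sum()

def nash(r):
    # Nash welfare: product of (nonnegative) rewards; sum of logs for stability.
    return np.sum(np.log(np.clip(r, 1e-12, None)))

def leximin_key(r):
    # Leximin: compare sorted reward vectors lexicographically, so the
    # worst-off party is maximized first, then the second worst-off, etc.
    return tuple(np.sort(r))

print("Utilitarian picks:", names[int(np.argmax([utilitarian(r) for r in rewards]))])
print("Nash picks:       ", names[int(np.argmax([nash(r) for r in rewards]))])
print("Leximin picks:    ", names[max(range(len(rewards)), key=lambda i: leximin_key(rewards[i]))])
```

On this toy instance the three criteria disagree (Utilitarian favors C, Nash favors A, Leximin favors B), which is exactly the tension a multi-party aggregation scheme must resolve and which a single learned reward function cannot express.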