Multi-turn Reinforcement Learning from Preference Human Feedback
CoRR (2024)
Abstract
Reinforcement Learning from Human Feedback (RLHF) has become the standard
approach for aligning Large Language Models (LLMs) with human preferences,
allowing LLMs to demonstrate remarkable abilities across a variety of tasks.
Existing methods work by emulating preferences at the level of a single
decision (turn), which limits their capabilities in settings that require
planning or multi-turn interactions to achieve a long-term goal. In this
paper, we address this issue by developing novel methods for Reinforcement
Learning (RL) from preference feedback between two full multi-turn
conversations. In the tabular setting, we present a novel mirror-descent-based
policy optimization algorithm for the general multi-turn preference-based RL
problem, and prove its convergence to a Nash equilibrium. To evaluate
performance, we create a new environment, Education Dialogue, in which a
teacher agent guides a student in learning a random topic, and show that a
deep-RL variant of our algorithm outperforms RLHF baselines. Finally, we show
that in an environment with explicit rewards, our algorithm recovers the same
performance as a reward-based RL baseline, despite relying solely on a weaker
preference signal.
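
As a rough illustration of the tabular setting described above, the sketch
below runs a mirror-descent (multiplicative-weights) self-play update on a
single-state preference game, whose symmetric Nash equilibrium is the
so-called von Neumann winner. This is a minimal sketch under strong
assumptions (a bandit-style single state, a known preference matrix `P`, and
an untuned step size `eta`), not the paper's full multi-turn algorithm.

```python
import numpy as np

# Minimal sketch: mirror-descent self-play on a single-state preference game.
# P[a, b] = probability that action a is preferred over action b; the
# symmetric Nash equilibrium of this game is the "von Neumann winner".
rng = np.random.default_rng(0)
n_actions = 4
P = rng.uniform(size=(n_actions, n_actions))
P = 0.5 * (P + (1.0 - P.T))  # enforce consistency: P[a,b] + P[b,a] = 1

eta = 0.5                                  # step size (assumed, untuned)
pi = np.full(n_actions, 1.0 / n_actions)   # uniform initial policy
avg_pi = np.zeros(n_actions)
n_steps = 2000

for t in range(n_steps):
    # Expected preference payoff of each action against the current policy.
    payoff = P @ pi
    # Mirror descent with KL regularization = multiplicative-weights update.
    pi = pi * np.exp(eta * payoff)
    pi /= pi.sum()
    avg_pi += pi

# In zero-sum self-play, the average iterate approximates the equilibrium.
avg_pi /= n_steps
print("approximate Nash / von Neumann winner:", np.round(avg_pi, 3))
```

In the self-play view, both sides of the preference comparison are played by
the same policy, and averaging the iterates is the standard way to extract an
approximate equilibrium from no-regret dynamics.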