MetaRM: Shifted Distributions Alignment via Meta-Learning
arXiv (2024)
Abstract
The success of Reinforcement Learning from Human Feedback (RLHF) in language
model alignment is critically dependent on the capability of the reward model
(RM). However, as training progresses, the output distribution of the policy
model shifts, which degrades the RM's ability to distinguish between responses.
This issue is further compounded because an RM trained on a specific data
distribution struggles to generalize to examples outside that distribution.
These two issues can be unified as a single challenge: the shifted distribution
of the environment. To surmount this challenge, we introduce MetaRM, a method
that leverages meta-learning to align the RM with the shifted environment
distribution. MetaRM trains the RM by minimizing the loss on its data,
particularly on data that improves its ability to differentiate examples from
the shifted target distribution. Extensive experiments demonstrate that MetaRM
significantly improves the RM's discriminative ability in iterative RLHF
optimization and also enables it to identify subtle differences in
out-of-distribution samples.
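
The abstract does not spell out the meta-objective, but the described scheme resembles a bilevel, MAML-style update: adapt the RM on samples from the shifted (policy-generated) distribution, then update the original parameters so the preference loss on labeled data still decreases after that adaptation. The sketch below illustrates one such step in PyTorch under these assumptions; it is not the paper's implementation, and `RewardModel`, `meta_step`, the random features standing in for language-model hidden states, and all hyperparameters are hypothetical.

```python
# A minimal MAML-style meta-update for a pairwise reward model.
# Hypothetical sketch, not the authors' code; requires torch >= 2.0.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class RewardModel(nn.Module):
    """Toy scalar reward head over pooled features (stand-in for an LM backbone)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def pairwise_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Standard Bradley-Terry preference loss: push chosen scores above rejected.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def meta_step(rm, opt, pref_batch, shifted_batch, inner_lr: float = 1e-2) -> float:
    """One bilevel update: adapt on shifted pairs, then update the original
    parameters so the adapted model still separates the labeled pairs."""
    chosen, rejected = pref_batch          # labeled preference data
    s_chosen, s_rejected = shifted_batch   # pairs drawn from the shifted policy

    # Inner step: simulate adapting the RM toward the shifted distribution.
    inner_loss = pairwise_loss(rm(s_chosen), rm(s_rejected))
    params = dict(rm.named_parameters())
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}

    # Outer step: the meta-loss is the preference loss of the *adapted* model
    # on labeled data; its gradient flows back to the original parameters.
    outer_loss = pairwise_loss(
        functional_call(rm, adapted, (chosen,)),
        functional_call(rm, adapted, (rejected,)),
    )
    opt.zero_grad()
    outer_loss.backward()
    opt.step()
    return outer_loss.item()

# Usage with random features standing in for LM hidden states.
rm = RewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-3)
pref = (torch.randn(8, 64), torch.randn(8, 64))
shifted = (torch.randn(8, 64), torch.randn(8, 64))
print(meta_step(rm, opt, pref, shifted))
```

The `create_graph=True` flag keeps the inner gradient step differentiable, which is what lets the outer preference loss steer the pre-adaptation parameters rather than only the adapted copy.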