Prototypical Reward Network for Data-Efficient RLHF
arXiv (2024)
Abstract
The reward model for Reinforcement Learning from Human Feedback (RLHF) has
proven effective in fine-tuning Large Language Models (LLMs). However,
collecting human feedback for RLHF can be resource-intensive and lead to
scalability issues for LLMs and complex tasks. Our proposed framework Proto-RM
leverages prototypical networks to enhance reward models under limited human
feedback. By enabling stable and reliable structural learning from fewer
samples, Proto-RM significantly enhances LLMs' adaptability and accuracy in
interpreting human preferences. Extensive experiments on various datasets
demonstrate that Proto-RM significantly improves the performance of reward
models and LLMs in human feedback tasks, achieving comparable and often
better results than traditional methods in data-limited scenarios, while
requiring significantly less data. This research offers a promising direction for
enhancing the efficiency of reward models and optimizing the fine-tuning of
language models under restricted feedback conditions.