RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback
CoRR (2024)
Abstract
Reward engineering has long been a challenge in Reinforcement Learning (RL)
research, as it often requires extensive human effort and iterative trial and
error to design effective reward functions. In this paper, we propose
RL-VLM-F, a method that automatically generates reward functions for agents
to learn new tasks, using only a text description of the task goal and the
agent's visual observations, by leveraging feedback from vision language
foundation models (VLMs). The key to our approach is to query these models
for preferences over pairs of the agent's image observations based on the
text description of the task goal, and then to learn a reward function from
the resulting preference labels, rather than directly prompting these models
to output raw reward scores, which can be noisy and inconsistent. We
demonstrate that RL-VLM-F successfully produces effective rewards and
policies across various domains, including classic control and the
manipulation of rigid, articulated, and deformable objects, all without
human supervision, outperforming prior methods that use large pretrained
models for reward generation under the same assumptions.
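
The reward-learning step described above follows the standard preference-based
recipe: pairwise preference labels are fit with a Bradley-Terry model, under
which the probability that observation a is preferred over b is
sigmoid(r(a) - r(b)). Below is a minimal sketch of that step in PyTorch; the
network architecture, optimizer settings, and placeholder data are
illustrative assumptions, not the paper's exact configuration, and the VLM
querying itself is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps an image observation to a scalar reward (hypothetical small CNN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, obs):  # obs: (batch, 3, H, W)
        return self.net(obs).squeeze(-1)

def preference_loss(reward_model, obs_a, obs_b, labels):
    """Bradley-Terry loss. labels[i] = 1.0 if the VLM preferred obs_a[i],
    0.0 if it preferred obs_b[i]."""
    r_a = reward_model(obs_a)
    r_b = reward_model(obs_b)
    # P(a preferred over b) = exp(r_a) / (exp(r_a) + exp(r_b)) = sigmoid(r_a - r_b),
    # so the cross-entropy on the reward difference recovers the model.
    return F.binary_cross_entropy_with_logits(r_a - r_b, labels)

# Usage: one gradient step on a batch of VLM-labeled image pairs
# (random tensors stand in for real observations and labels).
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
obs_a = torch.rand(8, 3, 64, 64)
obs_b = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,)).float()
loss = preference_loss(model, obs_a, obs_b, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Training on relative judgments in this way only asks the VLM which of two
images better matches the goal text, which is the abstract's stated reason
for preferring preference labels over raw scalar reward scores.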