A Generalized Acquisition Function for Preference-based Reward Learning
arXiv (2024)
Abstract
Preference-based reward learning is a popular technique for teaching robots
and autonomous systems how a human user wants them to perform a task. Previous
works have shown that actively synthesizing preference queries to maximize
information gain about the reward function parameters improves data efficiency.
The information gain criterion focuses on precisely identifying all parameters
of the reward function. This can potentially be wasteful as many parameters may
result in the same reward, and many rewards may result in the same behavior in
the downstream tasks. Instead, we show that it is possible to optimize for
learning the reward function up to a behavioral equivalence class, such as
inducing the same ranking over behaviors, distribution over choices, or other
related definitions of what makes two rewards similar. We introduce a tractable
framework that can capture such definitions of similarity. Our experiments in a
synthetic environment, an assistive robotics environment with domain transfer,
and a natural language processing problem with real datasets demonstrate the
superior performance of our querying method over the state-of-the-art
information gain method.
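The core idea above — scoring candidate preference queries by expected information gain, but over behavioral equivalence classes of rewards rather than raw parameters — can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's exact formulation: a linear reward model, a Bradley–Terry choice likelihood, a discrete sample-based posterior, and "same ranking over a fixed behavior set" as one example equivalence relation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: linear reward r(x) = w . phi(x) over feature vectors.
M, d = 200, 3                       # posterior samples, feature dimension
W = rng.normal(size=(M, d))         # samples approximating p(w | data)
p_w = np.ones(M) / M                # posterior weights (uniform here)
behaviors = rng.normal(size=(10, d))  # features of a fixed behavior set

def choice_prob(w, phi_a, phi_b):
    """Bradley-Terry probability that the human prefers a over b under w."""
    return 1.0 / (1.0 + np.exp(-(w @ (phi_a - phi_b))))

def equivalence_labels(W, behaviors):
    """Collapse reward samples into equivalence classes: samples that
    induce the same ranking over the behavior set share a class
    (one illustrative choice of 'behavioral similarity')."""
    rankings = np.argsort(W @ behaviors.T, axis=1)
    _, labels = np.unique(rankings, axis=0, return_inverse=True)
    return labels

def info_gain(query, W, p_w, labels=None):
    """Expected entropy reduction over classes (or over raw parameter
    samples when labels is None) from asking query = (phi_a, phi_b)."""
    phi_a, phi_b = query
    if labels is None:
        labels = np.arange(len(W))      # every sample is its own class
    probs = np.array([choice_prob(w, phi_a, phi_b) for w in W])
    expected_post_entropy = 0.0
    for p_ans in (probs, 1.0 - probs):  # possible answers: "a" or "b"
        post = p_w * p_ans
        z = post.sum()                  # probability of this answer
        if z <= 0:
            continue
        post /= z
        mass = np.bincount(labels, weights=post)
        mass = mass[mass > 0]
        expected_post_entropy += z * -(mass * np.log(mass)).sum()
    prior = np.bincount(labels, weights=p_w)
    prior = prior[prior > 0]
    prior_entropy = -(prior * np.log(prior)).sum()
    return prior_entropy - expected_post_entropy

# Pick the best query from a random candidate pool.
labels = equivalence_labels(W, behaviors)
queries = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(20)]
best = max(queries, key=lambda q: info_gain(q, W, p_w, labels))
```

Because the class label is a function of the parameters, the gain computed over classes never exceeds the gain over raw parameters; the acquisition therefore stops "paying" for queries that only distinguish rewards within the same behavioral class, which is the data-efficiency argument the abstract makes.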