Off-Policy Evaluation from Logged Human Feedback
arXiv (2024)
Abstract
Learning from human feedback has been central to recent advances in
artificial intelligence and machine learning. Since collecting human
feedback is costly, a natural question is whether new feedback always
needs to be collected, or whether we could evaluate a new model using the
human feedback gathered on another model's responses. This motivates us
to study off-policy evaluation from logged human feedback. We formalize
the problem, propose both model-based and model-free estimators for
policy values, and show how to optimize them. We analyze the
unbiasedness of our estimators and evaluate them empirically. Our
estimators can predict the absolute values of evaluated policies, rank
them, and be optimized.
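To illustrate the off-policy evaluation setting the abstract describes, here is a minimal sketch of a generic inverse-propensity-scoring (IPS) estimator on a two-action bandit. This is a standard textbook estimator, not the paper's proposed method; the logging policy `mu`, target policy `pi`, and reward probabilities are all illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical two-action bandit: feedback was logged under a logging
# policy mu, and we estimate the value of a new target policy pi
# without collecting fresh feedback. All numbers are illustrative.
mu = [0.7, 0.3]        # logging policy (action probabilities)
pi = [0.2, 0.8]        # target policy to evaluate offline
p_reward = [0.4, 0.9]  # assumed mean reward of each action

n = 200_000
logged = []
for _ in range(n):
    a = 0 if random.random() < mu[0] else 1          # log actions from mu
    r = 1.0 if random.random() < p_reward[a] else 0.0
    logged.append((a, r))

# IPS reweights each logged reward by pi(a)/mu(a); the estimate is
# unbiased whenever the logging policy has full support over actions.
ips_value = sum(pi[a] / mu[a] * r for a, r in logged) / n
true_value = pi[0] * p_reward[0] + pi[1] * p_reward[1]  # 0.80
```

With enough logged samples, `ips_value` concentrates around `true_value`, which is why such estimators can both predict absolute policy values and rank policies, as the abstract notes.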