Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer
Zhihan Liu, Miao Lu, Shenao Zhang, Boyi Liu, Hongyi Guo, Yingxiang Yang, Jose Blanchet, Zhaoran Wang
NeurIPS 2024
Keywords: Alignment, Reinforcement Learning from Human Feedback