mDPO: Conditional Preference Optimization for Multimodal Large Language Models
arXiv (2024)
Abstract
Direct preference optimization (DPO) has been shown to be an effective method for
large language model (LLM) alignment. Recent works have attempted to apply DPO
to multimodal scenarios but have found it challenging to achieve consistent
improvement. Through a comparative experiment, we identify the unconditional
preference problem in multimodal preference optimization, where the model
overlooks the image condition. To address this problem, we propose mDPO, a
multimodal DPO objective that prevents the over-prioritization of language-only
preferences by also optimizing image preference. Moreover, we introduce a
reward anchor that forces the reward to be positive for chosen responses,
thereby avoiding a decrease in their likelihood, an intrinsic problem of
relative preference optimization. Experiments on two multimodal LLMs of
different sizes and three widely used benchmarks demonstrate that mDPO
effectively addresses the unconditional preference problem in multimodal
preference optimization and significantly improves model performance,
particularly in reducing hallucination.
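The abstract describes two additions to the standard DPO objective: an image-preference term and a reward anchor. The sketch below illustrates one plausible reading of that combination in PyTorch. It assumes the image-preference term contrasts the chosen response conditioned on the true image against the same response conditioned on a corrupted image, and that the anchor compares the chosen response's implicit reward against a zero baseline; these construction details, along with all function and argument names, are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of an mDPO-style objective, assuming the construction above.
# policy_*/ref_* are summed token log-probabilities of a response under the
# trainable policy and the frozen reference model, respectively.

import torch
import torch.nn.functional as F


def mdpo_loss(
    policy_chosen_logps: torch.Tensor,           # log p_theta(y_w | image, x)
    policy_rejected_logps: torch.Tensor,         # log p_theta(y_l | image, x)
    policy_chosen_corrupt_logps: torch.Tensor,   # log p_theta(y_w | corrupted image, x)
    ref_chosen_logps: torch.Tensor,
    ref_rejected_logps: torch.Tensor,
    ref_chosen_corrupt_logps: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    # Implicit DPO rewards: beta * log(pi_theta / pi_ref).
    r_chosen = beta * (policy_chosen_logps - ref_chosen_logps)
    r_rejected = beta * (policy_rejected_logps - ref_rejected_logps)
    r_chosen_corrupt = beta * (policy_chosen_corrupt_logps - ref_chosen_corrupt_logps)

    # 1) Standard (language) preference: chosen response over rejected response.
    loss_dpo = -F.logsigmoid(r_chosen - r_rejected)

    # 2) Conditional (image) preference: the same chosen response should score
    #    higher under the true image than under the corrupted one, so the model
    #    cannot satisfy the objective while ignoring the image condition.
    loss_image = -F.logsigmoid(r_chosen - r_chosen_corrupt)

    # 3) Reward anchor: push the chosen response's reward above zero so its
    #    likelihood does not decrease during relative preference training.
    loss_anchor = -F.logsigmoid(r_chosen)

    return (loss_dpo + loss_image + loss_anchor).mean()
```

Summing the three sigmoid-based terms keeps every component on the same scale as the original DPO loss; a weighting coefficient per term would be a natural extension but is not implied by the abstract.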