A Dual Reinforcement Learning Framework for Weakly Supervised Phrase Grounding

IEEE TRANSACTIONS ON MULTIMEDIA (2024)

Abstract
Weakly supervised phrase grounding aims to localize the region of an image that corresponds to a given textual phrase, where the mapping between noun phrases and image regions is not available during training. Because region-level annotations are absent in the weakly supervised setting, previous methods typically rely on an auxiliary proxy task (e.g., phrase reconstruction or image-phrase alignment) to provide training supervision. However, there is a significant gap between the optimization objectives of these proxy tasks and the target grounding task, which can lead to inefficient optimization of the target model. In this paper, we therefore propose a novel dual reinforcement learning framework that optimizes the phrase grounding model directly. Specifically, we exploit the duality between the phrase grounding and phrase generation tasks: the two tasks form a closed loop in which each provides feedback signals that measure the quality of the other. This allows us to assess the correctness of the localized regions and thus optimize the grounding model directly. We design two reward functions to quantify these feedback signals and train the models via reinforcement learning. In addition, to ease the training of our framework, we present a heuristic algorithm that generates pseudo region-phrase pairs to warm-start the models. Experiments on two popular phrase grounding datasets, ReferItGame and Flickr30K Entities, demonstrate that our method outperforms previous methods by a large margin.
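The closed-loop reward idea summarized above can be made concrete with a small sketch. The toy models, feature dimensions, sampling scheme, and cosine-similarity reward below are illustrative assumptions rather than the authors' architecture or reward definitions: a grounder samples a region for the phrase and is rewarded by how well a generator reconstructs the phrase from that region, while the generator is trained so that its reconstruction re-localizes the same region.

```python
# Minimal, self-contained sketch of the closed-loop (dual) reward idea, using
# toy models. All names, dimensions, and the cosine-similarity reward are
# illustrative assumptions, not the paper's architecture or reward design.
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_REGIONS, REGION_DIM, PHRASE_DIM = 8, 32, 16

# Toy stand-ins: a grounding scorer (phrase -> region) and a phrase "generator"
# that maps a region feature back into the phrase embedding space.
grounder = nn.Bilinear(REGION_DIM, PHRASE_DIM, 1)   # scores (region, phrase) pairs
generator = nn.Linear(REGION_DIM, PHRASE_DIM)       # region feature -> phrase embedding
opt = torch.optim.Adam(list(grounder.parameters()) + list(generator.parameters()), lr=1e-3)

regions = torch.randn(NUM_REGIONS, REGION_DIM)      # proposal features
phrase = torch.randn(PHRASE_DIM)                    # embedding of the query phrase

# Grounding direction: sample a region for the phrase (discrete choice -> REINFORCE).
logits = grounder(regions, phrase.expand(NUM_REGIONS, -1)).squeeze(-1)
dist = torch.distributions.Categorical(logits=logits)
idx = dist.sample()

# Generation direction: reconstruct the phrase from the chosen region.
recon = generator(regions[idx])

# Reward for the grounder: how well the reconstruction matches the input phrase.
r_ground = torch.cosine_similarity(recon, phrase, dim=0).detach()

# Closing the loop: query the grounder with the reconstructed phrase and check
# how confidently it re-localizes the same region.
re_logits = grounder(regions, recon.expand(NUM_REGIONS, -1)).squeeze(-1)
r_gen = torch.softmax(re_logits, dim=0)[idx].detach()   # generation-side reward (monitoring)

# Losses: REINFORCE for the discrete region choice; the toy generator's
# relocalization objective is differentiable, so it is optimized directly here
# (the paper trains both directions with reinforcement learning).
loss_ground = -r_ground * dist.log_prob(idx)
loss_gen = -torch.log_softmax(re_logits, dim=0)[idx]
opt.zero_grad()
(loss_ground + loss_gen).backward()
opt.step()
print(f"grounding reward {r_ground.item():.3f}, generation reward {r_gen.item():.3f}")
```

In this sketch the grounding reward measures round-trip phrase consistency and the generation reward measures round-trip region consistency, which is the feedback structure the abstract describes; the warm-start from pseudo region-phrase pairs is omitted.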
Keywords
Grounding, Task analysis, Training, Reinforcement learning, Optimization, Image reconstruction, Proposals, Weakly supervised phrase grounding, visual grounding, dual learning, reinforcement learning