See, Say, and Segment: Teaching LMMs to Overcome False Premises
CoRR (2023)
Abstract
Current open-source Large Multimodal Models (LMMs) excel at tasks such as
open-vocabulary language grounding and segmentation but can suffer under false
premises when queries imply the existence of something that is not actually
present in the image. We observe that existing methods that fine-tune an LMM to
segment images significantly degrade their ability to reliably determine
("see") if an object is present and to interact naturally with humans ("say"),
a form of catastrophic forgetting. In this work, we propose a cascading and
joint training approach for LMMs to solve this task, avoiding catastrophic
forgetting of previous skills. Our resulting model can "see" by detecting
whether objects are present in an image, "say" by telling the user if they are
not, proposing alternative queries or correcting semantic errors in the query,
and finally "segment" by outputting the mask of the desired objects if they
exist. Additionally, we introduce a novel False Premise Correction benchmark
dataset, an extension of existing RefCOCO(+/g) referring segmentation datasets
(which we call FP-RefCOCO(+/g)). The results show that our method not only
detects false premises up to 55% better than existing approaches, but under
false premise conditions also achieves relative cIoU improvements of more than
31% over baselines and produces natural language feedback judged helpful up to
67% of the time.
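
The abstract describes a cascaded "see / say / segment" pipeline: first verify that the queried object exists, respond in natural language (correcting the premise if needed), and only then produce a mask. The sketch below illustrates that control flow only; the method names (detect, correct_premise, segment) are hypothetical placeholders, not the authors' actual API.

```python
# Minimal sketch of the cascaded "see / say / segment" inference flow,
# assuming a hypothetical LMM wrapper with detect / correct_premise / segment
# methods. This is an illustration of the control flow, not the paper's code.

def see_say_segment(image, query, lmm):
    # "See": ask the model whether the referred object is actually present.
    is_present = lmm.detect(image, query)  # hypothetical boolean presence check

    if not is_present:
        # "Say": report the false premise and suggest a corrected query,
        # e.g. a semantically related object that does appear in the image.
        feedback = lmm.correct_premise(image, query)  # hypothetical
        return {"mask": None, "response": feedback}

    # "Segment": the premise holds, so output a mask for the referred object.
    mask = lmm.segment(image, query)  # hypothetical mask-decoding call
    return {"mask": mask, "response": f"Segmented: {query}"}
```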