Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training
CoRR (2024)
Abstract
Highlighting particularly relevant regions of an image can improve the
performance of vision-language models (VLMs) on various vision-language (VL)
tasks by guiding the model to attend more closely to these regions of interest.
For example, VLMs can be given a "visual prompt", where visual markers such as
bounding boxes delineate key image regions. However, current VLMs that can
incorporate visual guidance are either proprietary and expensive or require
costly training on curated data that includes visual prompts. We introduce
Contrastive Region Guidance (CRG), a training-free guidance method that enables
open-source VLMs to respond to visual prompts. CRG contrasts model outputs
produced with and without visual prompts, factoring out biases revealed by the
model when answering without the information required to produce a correct
answer (i.e., the model's prior). CRG achieves substantial improvements in a
wide variety of VL tasks: When region annotations are provided, CRG increases
absolute accuracy by up to 11.1% on ViP-Bench, a collection of six diverse
region-based tasks such as recognition, math, and object relationship
reasoning. We also show CRG's applicability to spatial reasoning, with a 10%
improvement on What'sUp, as well as to compositional generalization –
improving accuracy by 11.5% and 7.5% on two challenging splits from SugarCrepe
– and to image-text alignment for generated images, where we improve by up to
8.4 AUROC and 6.8 F1 points on SeeTRUE. When reference regions are absent, CRG
allows us to re-rank proposed regions in referring expression comprehension and
phrase grounding benchmarks like RefCOCO/+/g and Flickr30K Entities, with an
average gain of 3.2% in accuracy. Our analysis explores alternative masking
strategies for CRG, quantifies CRG's probability shift, and evaluates the role
of region guidance strength, empirically validating CRG's design choices.
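
To make the contrastive step concrete, the sketch below combines two sets of answer logits from the same VLM — one computed on the original image and one computed on a copy with the prompted regions blacked out — in a classifier-free-guidance style. The function name, the toy numbers, and the exact combination rule are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def crg_adjusted_logits(logits_full: torch.Tensor,
                        logits_masked: torch.Tensor,
                        alpha: float = 1.0) -> torch.Tensor:
    """Contrast logits from the full image against logits from a copy
    whose regions of interest are blacked out. Subtracting the masked
    condition factors out what the model would answer without the
    region, i.e., its prior; alpha sets the guidance strength.
    """
    return (1 + alpha) * logits_full - alpha * logits_masked

# Toy usage: rank two candidate answers by their adjusted scores.
logits_full = torch.tensor([2.0, 1.5])    # regions visible
logits_masked = torch.tensor([1.9, 0.0])  # regions blacked out
print(crg_adjusted_logits(logits_full, logits_masked))  # tensor([2.1, 3.0])
# Candidate 0 wins on raw logits, but its score barely drops when the
# region is masked, so it mostly reflects the prior; the contrast
# shifts the preference to candidate 1, whose evidence is region-dependent.
```

The same scoring idea extends to the region-free setting described above: each proposed region induces its own masked image, and candidates can be re-ranked by how much the answer's score depends on that region.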