When are Lemons Purple? The Concept Association Bias of Vision-Language Models
arXiv (2022)
Abstract
Large-scale vision-language models such as CLIP have shown impressive
performance on zero-shot image classification and image-to-text retrieval.
However, such performance does not carry over to tasks that require a
finer-grained correspondence between vision and language, such as Visual
Question Answering (VQA). As a potential cause of the difficulty of applying
these models to VQA and similar tasks, we report an interesting phenomenon of
vision-language models, which we call the Concept Association Bias (CAB). We
find that models with CAB tend to treat the input as a bag of concepts and
attempt to fill in the missing concept cross-modally, leading to an unexpected
zero-shot prediction. We demonstrate CAB by showing that CLIP's zero-shot
classification performance greatly suffers when there is a strong concept
association between an object (e.g. eggplant) and an attribute (e.g. color
purple). We also show that the strength of CAB predicts the performance on VQA.
We observe that CAB is prevalent in vision-language models trained with
contrastive losses, even when autoregressive losses are jointly employed.
However, a model that solely relies on autoregressive loss seems to exhibit
minimal or no signs of CAB.
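The zero-shot classification setup probed here can be sketched as follows. This is an illustrative mock, not the authors' code: the 512-dimensional embeddings are random placeholders standing in for the outputs of CLIP's image and text encoders, and the prompt strings are hypothetical examples. A real CAB probe would compare predictions on prompts whose object and attribute are strongly associated (e.g. "eggplant" and "purple") against neutral pairings.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    """L2-normalize along the last axis, as CLIP does before the dot product."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Placeholder unit-norm embeddings standing in for encoder outputs.
image_emb = normalize(rng.normal(size=512))
prompts = [
    "a photo of an eggplant",
    "a photo of a lemon",
    "a photo of a cucumber",
]
text_embs = normalize(rng.normal(size=(len(prompts), 512)))

# CLIP-style zero-shot classification: temperature-scaled cosine
# similarities between the image and each text prompt, then softmax.
logits = 100.0 * text_embs @ image_emb
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = prompts[int(np.argmax(probs))]
```

The prediction is whichever prompt is most similar to the image embedding; CAB manifests when a strongly associated attribute in the prompt (e.g. a color word) dominates this similarity even though it contradicts the image.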