UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models
arXiv (2023)
Abstract
Teaching Visual Question Answering (VQA) models to refrain from answering
unanswerable questions is necessary for building a trustworthy AI system.
Although existing studies have explored various aspects of VQA, they have
largely overlooked this particular capability. This paper aims to bridge the research gap
by contributing a comprehensive dataset, called UNK-VQA. The dataset is
specifically designed to address the challenge of questions that models do not
know. To this end, we first augment the existing data via deliberate
perturbations of either the image or the question. Specifically, we carefully
ensure that the question-image semantics remain close to the original
unperturbed distribution. This makes unanswerable questions challenging to
identify, setting our dataset apart from others that rely on mere image
replacement. We then extensively evaluate the zero- and few-shot
performance of several emerging multi-modal large models and discover their
significant limitations when applied to our dataset. We additionally
propose a straightforward method for tackling these unanswerable questions. This
dataset, we believe, will serve as a valuable benchmark for enhancing the
abstention capability of VQA models, thereby leading to increased
trustworthiness of AI systems. We have made the dataset
(https://github.com/guoyang9/UNK-VQA) available to facilitate further
exploration in this area.
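The abstract does not detail the proposed abstention method. As a purely hypothetical illustration (the function name, score format, and threshold below are assumptions, not taken from the paper), one common way to frame abstention is to wrap a VQA model's answer distribution in a confidence threshold and decline to answer when no candidate is confident enough:

```python
# Hypothetical sketch of confidence-based abstention for VQA.
# This is NOT the paper's method; it only illustrates the general idea
# of refusing to answer when the model "does not know".

def answer_or_abstain(answer_scores, threshold=0.5):
    """Return the top-scoring answer, or None (abstain) if its
    probability falls below `threshold`.

    answer_scores: dict mapping candidate answers to softmax probabilities.
    """
    best_answer = max(answer_scores, key=answer_scores.get)
    if answer_scores[best_answer] < threshold:
        return None  # abstain on a likely-unanswerable question
    return best_answer

# A confident prediction is returned; a flat distribution triggers abstention.
print(answer_or_abstain({"cat": 0.92, "dog": 0.08}))
print(answer_or_abstain({"cat": 0.34, "dog": 0.33, "bird": 0.33}))
```

Evaluating such a baseline on UNK-VQA would measure whether low model confidence actually aligns with the dataset's deliberately perturbed, unanswerable question-image pairs.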