Vision-Language Models under Cultural and Inclusive Considerations
arXiv (2024)
Abstract
Large vision-language models (VLMs) can assist visually impaired people by
describing images from their daily lives. Current evaluation datasets may not
reflect diverse cultural user backgrounds or the situational context of this
use case. To address this problem, we create a survey to determine caption
preferences and propose a culture-centric evaluation benchmark by filtering
VizWiz, an existing dataset with images taken by people who are blind. We then
evaluate several VLMs, investigating their reliability as visual assistants in
a culturally diverse setting. While our results for state-of-the-art models are
promising, we identify challenges such as hallucination and misalignment of
automatic evaluation metrics with human judgment. We make our survey, data,
code, and model outputs publicly available.