VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding
arXiv (2024)
Abstract
We introduce a new benchmark designed to advance the development of
general-purpose, large-scale vision-language models for remote sensing images.
Although several vision-language datasets in remote sensing have been proposed
to pursue this goal, existing datasets are typically tailored to single tasks,
lack detailed object information, or suffer from inadequate quality control.
To address these gaps, we present a Versatile
vision-language Benchmark for Remote Sensing image understanding, termed
VRSBench. This benchmark comprises 29,614 images, with 29,614 human-verified
detailed captions, 52,472 object references, and 123,221 question-answer pairs.
It facilitates the training and evaluation of vision-language models across a
broad spectrum of remote sensing image understanding tasks. We further
evaluate state-of-the-art models on this benchmark for three vision-language
tasks: image captioning, visual grounding, and visual question answering. Our
work aims to significantly contribute to the development of advanced
vision-language models in the field of remote sensing. The data and code can be
accessed at https://github.com/lx709/VRSBench.
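
For readers who plan to work with the dataset, the sketch below illustrates how the three annotation types described above (captions, object references, and question-answer pairs) might be loaded and tallied per image. The file name VRSBench_train.json and the field names (image_id, caption, objects, qa_pairs) are assumptions made for illustration, not the dataset's confirmed schema; consult the GitHub repository for the actual layout.

```python
import json
from pathlib import Path

# Hypothetical paths and field names -- check the VRSBench repo for the real schema.
ANNOTATION_FILE = Path("VRSBench_train.json")  # assumed annotation file name
IMAGE_DIR = Path("images")                     # assumed image directory

def load_annotations(path: Path) -> list[dict]:
    """Load a list of per-image annotation records from a JSON file."""
    with path.open(encoding="utf-8") as f:
        return json.load(f)

def summarize(records: list[dict]) -> None:
    """Print per-task annotation counts, mirroring the statistics in the abstract."""
    n_captions = sum("caption" in r for r in records)
    n_refs = sum(len(r.get("objects", [])) for r in records)
    n_qa = sum(len(r.get("qa_pairs", [])) for r in records)
    print(f"images: {len(records)}")
    print(f"captions: {n_captions}")         # abstract reports 29,614
    print(f"object references: {n_refs}")    # abstract reports 52,472
    print(f"question-answer pairs: {n_qa}")  # abstract reports 123,221

if __name__ == "__main__":
    records = load_annotations(ANNOTATION_FILE)
    summarize(records)
    # Each record would pair one image with all three annotation types, e.g.:
    # {"image_id": ..., "caption": ..., "objects": [...], "qa_pairs": [...]}
```

A record structure like this would let a single dataset serve all three evaluated tasks (captioning, grounding, and VQA) from one annotation file, which matches the benchmark's stated goal of supporting a broad spectrum of tasks.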