LOVA3: Learning to Visual Question Answering, Asking and Assessment
CoRR (2024)
Abstract
Question answering, asking, and assessment are three innate human traits
crucial for understanding the world and acquiring knowledge. By enhancing these
capabilities, humans can more effectively utilize data, leading to better
comprehension and learning outcomes. However, current Multimodal Large Language
Models (MLLMs) primarily focus on question answering, often neglecting the full
potential of questioning and assessment skills. In this study, we introduce
LOVA3, an innovative framework named “Learning tO Visual Question Answering,
Asking and Assessment,” designed to equip MLLMs with these additional
capabilities. Our approach involves the creation of two supplementary training
tasks, GenQA and EvalQA, aimed at fostering the skills of asking and assessing
questions in the context of images. To develop the questioning ability, we
compile a comprehensive set of multimodal foundational tasks. For assessment,
we introduce a new benchmark called EvalQABench, comprising 64,000 training
samples (split evenly between positive and negative samples) and 5,000 testing
samples. We posit that enhancing MLLMs with the capabilities to answer, ask,
and assess questions will improve their multimodal comprehension and lead to
better performance. We validate our hypothesis by training an MLLM using the
LOVA3 framework and testing it on 10 multimodal benchmarks. The results
demonstrate consistent performance improvements, thereby confirming the
efficacy of our approach.