TRINS: Towards Multimodal Language Models That Can Read
CVPR 2024
Abstract
Large multimodal language models have shown remarkable proficiency in
understanding and editing images. However, a majority of these visually-tuned
models struggle to comprehend textual content embedded in images, primarily
due to limitations in their training data. In this work, we introduce TRINS: a
Text-Rich image INStruction dataset, with the objective of enhancing the
reading ability of multimodal large language models. TRINS is built upon
LAION using hybrid data annotation strategies that include machine-assisted and
human-assisted annotation processes. It contains 39,153 text-rich images,
captions, and 102,437 questions. Specifically, we show that annotations in
TRINS contain significantly more words than those in related datasets, posing
new challenges. Furthermore, we introduce a simple and effective
architecture, the Language-vision Reading Assistant (LaRA), which excels
at understanding textual content within images. LaRA outperforms existing
state-of-the-art multimodal large language models on the TRINS dataset, as well
as other classical benchmarks. Lastly, we conduct a comprehensive evaluation
with TRINS on various text-rich image understanding and generation tasks,
demonstrating its effectiveness.