Enhancing Vision-Language Pre-training with Rich Supervisions
CoRR (2024)
Abstract
We propose Strongly Supervised pre-training with ScreenShots (S4) - a novel
pre-training paradigm for Vision-Language Models using data from large-scale
web screenshot rendering. Using web screenshots unlocks a treasure trove of
visual and textual cues that are absent from plain image-text pairs. In S4,
we leverage the inherent tree-structured hierarchy of HTML elements and their
spatial localization to carefully design 10 pre-training tasks with large-scale
annotated data. These tasks resemble downstream tasks across different domains
and the annotations are cheap to obtain. We demonstrate that, compared to
current screenshot pre-training objectives, our innovative pre-training method
significantly enhances the performance of image-to-text models on nine varied and
popular downstream tasks, with improvements of up to 76.1% on Table Detection and
at least 1% on Widget Captioning.
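To illustrate the kind of supervision the abstract describes, here is a minimal, hypothetical sketch (not the paper's actual pipeline) of recovering the tree-structured hierarchy of HTML elements, with each element's tag, nesting depth, and visible text, using only Python's standard-library `html.parser`. A real screenshot-rendering pipeline would additionally record each element's rendered bounding box from the browser to obtain the spatial localization signal.

```python
from html.parser import HTMLParser

class TreeExtractor(HTMLParser):
    """Collect (depth, tag, text) records in document order.

    Hypothetical illustration: the element tree behind a rendered page
    supplies cheap annotations (hierarchy, text content) of the kind
    S4-style pre-training tasks could be built on.
    """

    def __init__(self):
        super().__init__()
        self.depth = 0
        self.nodes = []  # mutable [depth, tag, text] records

    def handle_starttag(self, tag, attrs):
        self.nodes.append([self.depth, tag, ""])
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth = max(0, self.depth - 1)

    def handle_data(self, data):
        text = data.strip()
        if text and self.nodes:
            # Attach visible text to the most recently opened element.
            self.nodes[-1][2] += text

def element_tree(html):
    parser = TreeExtractor()
    parser.feed(html)
    return [tuple(n) for n in parser.nodes]

if __name__ == "__main__":
    page = "<div><h1>Title</h1><ul><li>first</li><li>second</li></ul></div>"
    for depth, tag, text in element_tree(page):
        print("  " * depth + f"<{tag}> {text!r}")
```

Running the sketch on the toy page prints an indented tree, showing how hierarchy and text annotations fall out of the HTML for free, in contrast to image-text pairs where such structure is unavailable.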