Boosting Visual-Language Models by Exploiting Hard Samples
arXiv (2023)
Abstract
Contrastive Language-Image Pre-training (CLIP) has become the standard for
learning cross-modal representations between images and text. Efforts to
improve its capabilities typically demand the collection of additional data and
retraining with new loss functions. While effective, the added requirements
limit their practical use due to the increased resource and time investments
needed. In this work, we present HELIP, a cost-effective strategy tailored to
enhance the performance of existing CLIP models without the need for training a
model from scratch or collecting additional data. Our method allows for
effortless integration with existing models' training pipelines, providing an
instant boost by training them with selected challenging text-image pairs from
their original training datasets. HELIP treats each text-image pair as a single
point in the joint vision-language space, identifying those in close proximity
as hard pairs. By incorporating the challenging data, pre-trained CLIP models
are refined using both the traditional contrastive loss and the newly
introduced hard negative margin loss, ensuring the challenging data is fully
utilized. On comprehensive benchmarks, HELIP consistently boosts existing
models to achieve leading performance. In particular, it improves the zero-shot
classification accuracy on ImageNet for SLIP models pre-trained on CC3M, CC12M
and YFCC15M datasets. The improvements are 3.05
respectively, achieved within two epochs of training. In addition, across
fine-grained classification datasets, HELIP improves the zero-shot performance
of pre-trained CLIP and SLIP by an average of 8.4%, and improves their linear
probe performance by an average of 9.5%.
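
The sketch below illustrates the two ideas the abstract describes: mining hard pairs as nearest neighbours when each text-image pair is embedded as a single point in the joint space, and fine-tuning with the standard CLIP contrastive loss plus a hard negative margin term. It is a minimal reconstruction from the abstract only, not the authors' implementation; the function names, the hinge form of the margin loss, and hyperparameters such as `k`, `margin`, `lam`, and `temperature` are assumptions.

```python
# Minimal sketch of the HELIP idea as described in the abstract (assumed details marked below).
import torch
import torch.nn.functional as F


def mine_hard_pairs(pair_embeddings: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Treat each text-image pair as one point in the joint vision-language space
    and return, for every pair, the indices of its k nearest (hardest) neighbours.
    The choice of cosine similarity and of k is an assumption."""
    pair_embeddings = F.normalize(pair_embeddings, dim=-1)
    similarity = pair_embeddings @ pair_embeddings.t()        # (N, N) pairwise similarities
    similarity.fill_diagonal_(-float("inf"))                   # exclude each pair itself
    return similarity.topk(k, dim=-1).indices                  # (N, k) hard-pair indices


def helip_loss(image_emb, text_emb, hard_image_emb, hard_text_emb,
               temperature: float = 0.07, margin: float = 0.2, lam: float = 1.0):
    """CLIP contrastive loss plus an assumed hinge-style hard negative margin loss
    that pushes each matched pair above its mined hard negatives by `margin`."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (1) Standard in-batch InfoNCE / CLIP contrastive loss.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(len(image_emb), device=image_emb.device)
    contrastive = 0.5 * (F.cross_entropy(logits, targets) +
                         F.cross_entropy(logits.t(), targets))

    # (2) Hard negative margin loss: matched-pair similarity should exceed the
    # similarity to the mined hard negatives by at least `margin`.
    hard_text_emb = F.normalize(hard_text_emb, dim=-1)          # (N, k, D)
    hard_image_emb = F.normalize(hard_image_emb, dim=-1)        # (N, k, D)
    pos_sim = (image_emb * text_emb).sum(-1, keepdim=True)      # (N, 1)
    i2t_neg = torch.einsum("nd,nkd->nk", image_emb, hard_text_emb)
    t2i_neg = torch.einsum("nd,nkd->nk", text_emb, hard_image_emb)
    margin_loss = (F.relu(margin + i2t_neg - pos_sim).mean() +
                   F.relu(margin + t2i_neg - pos_sim).mean())

    return contrastive + lam * margin_loss
```

Under this reading, a pre-trained CLIP or SLIP model would be fine-tuned on its original dataset, with the mined hard pairs supplying the extra negatives for the margin term, rather than being retrained from scratch or fed newly collected data.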