Reducing Task Discrepancy of Text Encoders for Zero-Shot Composed Image Retrieval
arXiv (2024)
Abstract
Composed Image Retrieval (CIR) aims to retrieve a target image based on a
reference image and conditioning text, enabling controllable searches. Due to
the expensive dataset construction cost for CIR triplets, a zero-shot (ZS) CIR
setting has been actively studied to eliminate the need for human-collected
triplet datasets. Mainstream ZS-CIR methods employ an efficient projection
module that maps a CLIP image embedding into the CLIP text token embedding
space while keeping the CLIP encoders frozen. Using the projected image
embedding, these methods generate image-text composed features with the
pre-trained text encoder. However, the CLIP image and text encoders suffer
from a task discrepancy between the pre-training task (text ↔ image) and
the target CIR task (image + text ↔ image). Conceptually, reducing this
discrepancy requires expensive triplet samples; instead, we update the text
encoder using only cheap text triplets. To that end, we introduce the
Reducing Task Discrepancy of text encoders for Composed Image Retrieval (RTD),
a plug-and-play training scheme for the text encoder that enhances its
capability using a novel target-anchored text contrastive learning. We also
propose two additional techniques to improve the proposed learning scheme: a
hard negatives-based refined batch sampling strategy and a sophisticated
concatenation scheme. Integrating RTD into the state-of-the-art
projection-based ZS-CIR methods significantly improves performance across
various datasets and backbones, demonstrating its efficiency and
generalizability.
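To make the projection-based ZS-CIR pipeline described above concrete, here is a minimal NumPy sketch. It is an illustrative toy, not the paper's implementation: the projection module `project_image` (a two-layer MLP), the embedding dimensions, and the mean-pooling stand-in for the frozen CLIP text encoder are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IMG, D_TOK = 512, 512  # hypothetical CLIP image-embedding / token-embedding dims

# Hypothetical projection module: maps a CLIP image embedding into the
# text token embedding space (a two-layer MLP sketch, randomly initialized).
W1 = rng.normal(scale=0.02, size=(D_IMG, 1024))
W2 = rng.normal(scale=0.02, size=(1024, D_TOK))

def project_image(img_emb):
    """Map an image embedding to a pseudo word-token embedding."""
    h = np.maximum(img_emb @ W1, 0.0)  # ReLU
    return h @ W2

def compose(pseudo_token, text_tokens):
    """Stand-in for the frozen text encoder: the pseudo token is placed
    alongside the conditioning-text token embeddings (as in a template like
    "a photo of [*] that ...") and pooled into one composed query vector."""
    seq = np.vstack([pseudo_token[None, :], text_tokens])
    q = seq.mean(axis=0)
    return q / np.linalg.norm(q)  # L2-normalize, as CLIP retrieval does

# Toy inputs standing in for real CLIP outputs.
img_emb = rng.normal(size=D_IMG)            # reference-image embedding
text_tokens = rng.normal(size=(5, D_TOK))   # conditioning-text token embeddings

query = compose(project_image(img_emb), text_tokens)
print(query.shape)  # (512,)
```

The composed `query` would then be matched against target-image embeddings by cosine similarity; RTD's contribution is to fine-tune the text encoder in this pipeline with target-anchored text contrastive learning, which this sketch does not cover.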