Image2Sentence based Asymmetrical Zero-shot Composed Image Retrieval
ICLR 2024
Abstract
The task of composed image retrieval (CIR) aims to retrieve images based on a
query image and text describing the user's intent. Existing methods have made
great progress on the CIR task with advanced large vision-language (VL)
models; however, they generally suffer from two main issues: a lack of labeled
triplets for model training and the difficulty of deploying large VL models in
resource-restricted environments. To tackle these problems, we propose
Image2Sentence based Asymmetric zero-shot composed image retrieval (ISA),
which takes advantage of the VL model and relies only on unlabeled images for
composition learning. In this framework, we propose a new adaptive token
learner that maps an image to a sentence in the word embedding space of the VL
model. The sentence adaptively captures discriminative visual information and
is further integrated with the text modifier. An asymmetric structure is
devised for flexible deployment: a lightweight model is adopted on the query
side, while the large VL model is deployed on the gallery side. Global
contrastive distillation and local alignment regularization are adopted to
align the lightweight model with the VL model for the CIR task. Our
experiments demonstrate that the proposed ISA better copes with real retrieval
scenarios and further improves retrieval accuracy and efficiency.
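The global contrastive distillation mentioned above aligns the lightweight query-side encoder with the frozen large VL encoder. The abstract does not give the exact loss, but a common formulation for such alignment is an InfoNCE-style objective in which matched image pairs are positives and other images in the batch are negatives. The sketch below illustrates this assumed formulation with numpy; function names and the temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def contrastive_distillation_loss(light_emb, large_emb, temperature=0.07):
    """InfoNCE-style sketch of contrastive distillation: pull each
    lightweight-encoder embedding toward the large VL encoder's embedding
    of the same image (diagonal pairs), pushing it away from embeddings
    of other images in the batch. Both inputs are (batch, dim) arrays."""
    # L2-normalize so dot products become cosine similarities
    light = light_emb / np.linalg.norm(light_emb, axis=1, keepdims=True)
    large = large_emb / np.linalg.norm(large_emb, axis=1, keepdims=True)
    logits = light @ large.T / temperature          # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    # log-softmax over each row; the matched pair sits on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Sanity check: embeddings aligned with the teacher incur a lower loss
# than mismatched (shuffled) ones.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))
aligned_loss = contrastive_distillation_loss(teacher, teacher)
shuffled_loss = contrastive_distillation_loss(teacher, teacher[::-1].copy())
```

In the paper's asymmetric setting, the teacher (gallery-side) embeddings would come from the frozen large VL model, so only the lightweight student receives gradients; the local alignment regularization described in the abstract would be an additional term not shown here.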
Key words
zero-shot, composed image retrieval, asymmetrical