
Cross-Modal Attention Preservation with Self-Contrastive Learning for Composed Query-Based Image Retrieval (Just Accepted)

ACM Transactions on Multimedia Computing, Communications, and Applications (2023)

Abstract
In this paper, we study a challenging cross-modal image retrieval task, Composed Query-Based Image Retrieval (CQBIR), in which the query is not a single text query but a composed query, i.e., a reference image plus a modification text. Compared with the conventional cross-modal image-text retrieval task, CQBIR is more challenging, as it requires properly preserving and modifying specific image regions according to the multi-level semantic information learned from the multi-modal query. Most recent works focus on extracting preserved and modified information and compositing them into a unified representation. However, we observe that the preserved regions learned by existing methods contain redundant modified information, which inevitably degrades overall retrieval performance. To this end, we propose a novel method termed Cross-Modal Attention Preservation (CMAP). Specifically, we first leverage cross-level interaction to fully account for multi-granular semantic information, aiming to supplement high-level semantics for effective image retrieval. Furthermore, unlike conventional contrastive learning, our method introduces self-contrastive learning into learning the preserved information, preventing the model from confusing the attention for the preserved part with that for the modified part. Extensive experiments on three widely used CQBIR datasets, i.e., FashionIQ, Shoes, and Fashion200k, demonstrate that our proposed CMAP method significantly outperforms the current state-of-the-art methods on all datasets. The anonymized implementation code of our CMAP method is available at https://github.com/CFM-MSG/Code_CMAP.
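To make the self-contrastive idea concrete, the following is a minimal sketch (not the authors' released code; see the repository above for that). It assumes each preserved or modified attention region is summarized by a feature vector, and uses an InfoNCE-style loss in which two views of the preserved attention form the positive pair while the modified-attention vectors act as negatives, pushing the preserved map away from the modified one. The batch shapes, the temperature `tau`, and the view construction are all illustrative assumptions.

```python
# Illustrative sketch of a self-contrastive objective that separates
# "preserved" from "modified" attention. All shapes and hyperparameters
# are assumptions for illustration, not the paper's exact formulation.
import numpy as np

def _normalize(x):
    # Row-wise L2 normalization so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def self_contrastive_loss(preserved_a, preserved_b, modified, tau=0.1):
    """InfoNCE-style loss: for each sample, the two views of its preserved
    attention are the positive pair; every modified-attention vector in the
    batch is a negative, discouraging overlap between the two attention maps.

    preserved_a, preserved_b : (B, D) two views of preserved-attention features
    modified                 : (B, D) modified-attention features
    """
    pa, pb, m = _normalize(preserved_a), _normalize(preserved_b), _normalize(modified)
    pos = np.sum(pa * pb, axis=-1) / tau      # (B,)  preserved-vs-preserved
    neg = (pa @ m.T) / tau                    # (B, B) preserved-vs-modified
    logits = np.concatenate([pos[:, None], neg], axis=1)
    # Numerically stable -log p(positive), averaged over the batch.
    mx = logits.max(axis=1, keepdims=True)
    logsumexp = np.log(np.exp(logits - mx).sum(axis=1)) + mx[:, 0]
    return float(np.mean(logsumexp - pos))
```

When the two preserved views agree and stay dissimilar from the modified features, the loss is small; confusing preserved with modified attention raises it, which is the behavior the abstract attributes to the self-contrastive term.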
Key words
Composed Query-Based Image Retrieval, Cross-Modal Retrieval, Cross-Level Interaction, Preserved and Modified Attentions