
Self-Distillation Hashing for Efficient Hamming Space Retrieval

ICASSP 2023 - IEEE International Conference on Acoustics, Speech and Signal Processing (2023)

Abstract
Deep hashing-based approaches have become leading solutions for large-scale image retrieval tasks due to their high computational efficiency and low storage burden. Some methods leverage a large teacher network to improve the retrieval performance of a small student network through knowledge distillation, which incurs high computational and time costs. In this paper, we propose Self-Distillation Hashing (SeDH), which improves image retrieval performance without introducing a complex teacher model and significantly reduces the overall computation cost. Specifically, we generate soft targets by ensembling the logits of other similar images within the mini-batch. The ensembled soft targets model the relations between different image samples and act as additional supervision for classification. In addition, to learn more compact features and accurate inter-sample similarities, we propose a similarity-preserving loss on the learned hashing features, which aligns the softened similarity distribution with the pairwise soft similarity. Extensive experiments demonstrate that our approach yields state-of-the-art performance on deep supervised hashing retrieval.
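
The abstract describes two losses: a self-distillation term whose soft targets come from ensembling the logits of similar images within the mini-batch, and a similarity-preserving term on the hashing features. The sketch below is a minimal PyTorch-style interpretation of those two ideas, not the authors' implementation; it assumes "similar" means sharing a class label, uses KL divergence for both alignments, and all function names and temperatures are illustrative assumptions.

# Hedged sketch of the two losses suggested by the abstract (assumptions noted below).
import torch
import torch.nn.functional as F

def soft_target_loss(logits, labels, temperature=4.0):
    """Distill each sample from an ensemble of same-class logits in the mini-batch.

    The abstract says soft targets are generated by ensembling logits of other
    similar images in the batch; here "similar" is approximated by a shared
    class label (an assumption).
    """
    with torch.no_grad():
        same_class = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
        same_class.fill_diagonal_(0.0)                    # exclude the sample itself
        weights = same_class / same_class.sum(dim=1, keepdim=True).clamp(min=1.0)
        ensembled = weights @ logits                      # average logits of similar samples
        soft_targets = F.softmax(ensembled / temperature, dim=1)
    log_probs = F.log_softmax(logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

def similarity_preserving_loss(hash_features, labels, temperature=1.0):
    """Align the softened pairwise similarity distribution of the relaxed hash
    codes with a label-derived pairwise soft similarity (assumed form)."""
    feats = F.normalize(torch.tanh(hash_features), dim=1)    # relaxed binary codes
    sims = feats @ feats.t() / temperature                   # pairwise similarity logits
    target = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    target = target / target.sum(dim=1, keepdim=True)        # row-normalized soft similarity
    log_p = F.log_softmax(sims, dim=1)
    return F.kl_div(log_p, target, reduction="batchmean")

# Usage with random tensors standing in for a student network's outputs.
if __name__ == "__main__":
    logits = torch.randn(8, 10)           # classification logits
    codes = torch.randn(8, 48)            # 48-bit hash features before binarization
    labels = torch.randint(0, 10, (8,))
    loss = soft_target_loss(logits, labels) + similarity_preserving_loss(codes, labels)
    print(loss.item())

Because the soft targets are built from other samples in the same batch, no separate teacher network is needed, which is the computational saving the abstract emphasizes.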
Key words
Self-distillation, Deep supervised hashing, Image retrieval, Pairwise similarity