RankMI: A Mutual Information Maximizing Ranking Loss

CVPR 2020

Cited by 43 | Viewed 118
Abstract
We introduce an information-theoretic loss function, RankMI, and an associated training algorithm for deep representation learning for image retrieval. Our proposed framework consists of alternating updates to a network that estimates the divergence between the distance distributions of matching and non-matching pairs of learned embeddings, and an embedding network that maximizes this estimate via sampled negatives. In addition, under this information-theoretic lens we draw connections between RankMI and commonly used ranking losses, e.g., the triplet loss. We extensively evaluate RankMI on several standard image retrieval datasets, namely CUB-200-2011, CARS-196, and Stanford Online Products. Our method achieves results that are competitive with, or significantly improve upon, previously reported results on all datasets.
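The divergence estimate described in the abstract can be illustrated with a Donsker-Varadhan-style lower bound, the standard variational bound used in mutual-information estimation. The following is a minimal NumPy sketch, not the paper's implementation: the critic scores `t_pos` and `t_neg` are hypothetical stand-ins for a learned network's outputs on the distances of matching and non-matching embedding pairs.

```python
import numpy as np

def dv_lower_bound(t_pos, t_neg):
    """Donsker-Varadhan lower bound on the divergence between the
    distance distributions of matching (P) and non-matching (Q) pairs:
    E_P[T] - log E_Q[exp(T)]. A higher value indicates the critic
    separates the two distributions more strongly."""
    return t_pos.mean() - np.log(np.exp(t_neg).mean())

rng = np.random.default_rng(0)
# Hypothetical critic outputs: well-separated scores for matching
# vs. non-matching pairs (these numbers are illustrative only).
t_pos = rng.normal(1.0, 0.5, 1000)   # critic on matching-pair distances
t_neg = rng.normal(-1.0, 0.5, 1000)  # critic on non-matching-pair distances

bound = dv_lower_bound(t_pos, t_neg)
```

In an alternating scheme like the one the abstract describes, one phase would update the critic to tighten this bound, and the other would update the embedding network to increase it, pushing matching and non-matching distance distributions apart. Note that by Jensen's inequality the bound is at most zero when both score sets come from the same distribution, so a positive value signals genuine separation.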
Keywords
ranking loss maximization, Stanford Online Products, deep representation learning, standard image retrieval datasets, information-theoretic lens, embedding network, learned embeddings, distance distributions, training algorithm, information-theoretic loss function, mutual information, RankMI