Embedding spatial context information into inverted file for large-scale image retrieval

Proceedings of the 20th ACM international conference on Multimedia (2012)

Abstract
One of the most popular approaches to large-scale content-based image retrieval is based on the Bag-of-Visual-Words model. Since the spatial context among local features is very important for identifying visual content, many approaches index the geometric cues of local features, such as location, scale, and orientation, for post-verification. To obtain consistent accuracy, the number of top-ranked images that a post-verification approach must process is proportional to the size of the image database; when the database is very large, too many images need to be verified for the system to respond in real time. To address this issue, in this paper we explore two approaches to embedding spatial context information into the inverted file. The first builds a spatial relationship dictionary that embeds the spatial context among local features, which we call the one-one spatial relationship method. The second generates a spatial context binary signature for each feature, which we call the one-multiple spatial relationship method. We then build an inverted file that carries the spatial information between local features, so geometric verification is achieved implicitly while traversing the inverted file. Experimental results on the benchmark Holidays dataset demonstrate the efficiency of the proposed algorithm.
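The one-multiple idea (a per-feature binary signature checked while walking posting lists) lends itself to a short illustration. The Python sketch below shows one plausible realization, not the paper's actual implementation: the feature objects, the `word_id` attribute, the neighbor-selection callback, the 16-bit signature length, and the Hamming threshold are all illustrative assumptions.

```python
# Minimal sketch: an inverted file whose postings carry a spatial-context
# binary signature, so geometric checking happens during traversal instead
# of in a separate post-verification pass. Assumed details: features expose
# a quantized `word_id`; SIG_BITS and HAMMING_THRESH are arbitrary choices.
from collections import defaultdict

SIG_BITS = 16        # assumed signature length
HAMMING_THRESH = 3   # assumed tolerance for spatial-context mismatch

def spatial_signature(feature, neighbors):
    """Encode which visual words occur around `feature` as a bit pattern.

    Each neighbor's visual word is hashed to one of SIG_BITS buckets and
    the buckets are OR-ed together (a simplified, assumed encoding).
    """
    sig = 0
    for nb in neighbors:
        sig |= 1 << (hash(nb.word_id) % SIG_BITS)
    return sig

def index_image(inverted_file, image_id, features, neighbor_fn):
    """Add one image's quantized features to the inverted file.

    `neighbor_fn(f, features)` returns the spatial neighbors of `f`;
    how neighbors are chosen (k nearest, scale-adaptive region, ...)
    is left open here.
    """
    for f in features:
        sig = spatial_signature(f, neighbor_fn(f, features))
        inverted_file[f.word_id].append((image_id, sig))

def query(inverted_file, query_features, neighbor_fn):
    """Score database images; a posting votes only if its stored signature
    is within HAMMING_THRESH of the query feature's signature."""
    scores = defaultdict(int)
    for f in query_features:
        q_sig = spatial_signature(f, neighbor_fn(f, query_features))
        for image_id, sig in inverted_file.get(f.word_id, []):
            if bin(q_sig ^ sig).count("1") <= HAMMING_THRESH:
                scores[image_id] += 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Because the signature comparison is a constant-time XOR-and-popcount per posting, the spatial check adds only a small per-posting cost during traversal, which is what removes the need to re-rank a database-size-dependent list of candidates afterwards.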