D2-Net: A Trainable CNN for Joint Description and Detection of Local Features

2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)

Abstract
In this work we address the problem of finding reliable pixel-level correspondences under difficult imaging conditions. We propose an approach where a single convolutional neural network plays a dual role: it is simultaneously a dense feature descriptor and a feature detector. By postponing the detection to a later stage, the obtained keypoints are more stable than their traditional counterparts based on early detection of low-level structures. We show that this model can be trained using pixel correspondences extracted from readily available large-scale SfM reconstructions, without any further annotations. The proposed method obtains state-of-the-art performance on both the difficult Aachen Day-Night localization dataset and the InLoc indoor localization benchmark, as well as competitive performance on other benchmarks for image matching and 3D reconstruction.
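The abstract's central idea, a single dense feature map that doubles as descriptor and detector, can be illustrated with a short sketch. The snippet below is a minimal, illustrative reading of that idea, not the authors' released code: the truncated VGG16 backbone, the soft local-max / channel-ratio detection score, the detect_and_describe helper, and the top_k parameter are all assumptions made for this example.

```python
# Minimal sketch of "describe-then-detect" from a single dense feature map.
# All concrete choices (backbone, score formula, top_k) are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision

# Dense feature extractor: a truncated VGG16 up to conv4_3 (assumption; any fully
# convolutional CNN producing a (C, h, w) feature map would do). Pretrained or
# fine-tuned weights would be loaded here in practice.
backbone = torchvision.models.vgg16().features[:23].eval()

@torch.no_grad()
def detect_and_describe(image, top_k=512):
    """image: float tensor of shape (1, 3, H, W), normalized for the backbone."""
    fmap = backbone(image)                       # (1, C, h, w) dense feature map
    b, c, h, w = fmap.shape

    # Descriptors: the same map, L2-normalized at every spatial location.
    desc = F.normalize(fmap, p=2, dim=1)

    # Detection score (illustrative): a location scores high only if some channel
    # responds strongly relative to both its 3x3 spatial neighborhood (soft local
    # max) and the other channels at that location (ratio to the channel-wise max).
    stable = fmap - fmap.amax(dim=(2, 3), keepdim=True)        # numerical stability
    local_sum = F.avg_pool2d(stable.exp(), kernel_size=3, stride=1, padding=1) * 9.0
    alpha = stable.exp() / local_sum.clamp(min=1e-8)           # soft local max
    beta = fmap / fmap.amax(dim=1, keepdim=True).clamp(min=1e-8)
    score = (alpha * beta).amax(dim=1)                          # (1, h, w)

    # Keypoints: top-k scoring locations on the (downsampled) feature grid.
    vals, idx = score.view(b, -1).topk(min(top_k, h * w), dim=1)
    ys, xs = idx // w, idx % w
    keypoints = torch.stack([xs, ys], dim=-1)                   # (1, k, 2), feature-map coords
    descriptors = desc[0, :, ys[0], xs[0]].t()                  # (k, C)
    return keypoints, descriptors, vals
```

In practice, the selected keypoint locations, which here live on the downsampled feature grid, would be mapped back to image resolution, and the backbone would be trained on the SfM-derived pixel correspondences mentioned in the abstract rather than used off the shelf.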
Keywords
joint detection, trainable CNN, features