Shoe-print image retrieval with multi-part weighted CNN

IEEE Access (2019)

Abstract
Identifying shoe-print impressions from the scene of crime (SoC) against database images is a challenging problem in forensic science due to complex impression surfaces, the partial absence of on-site impressions, and the large domain gap between query and gallery images. Existing approaches focus heavily on feature extraction while ignoring the distinctive characteristics of shoe-print images. In this paper, we propose a novel multi-part weighted convolutional neural network (MP-CNN) for shoe-print image retrieval. Specifically, the proposed CNN model processes images in three steps: 1) dividing the input image vertically into two parts and extracting a sub-feature from each part with a parameter-shared network; 2) computing an importance weight matrix for the sub-features based on the informative pixels they contain and concatenating the weighted sub-features into the final feature; and 3) using the triplet loss function to measure the similarity between query and gallery images. In addition to the proposed network, we adopt an effective strategy based on the U-Net structure to enhance image quality and reduce the domain gap. Experimental evaluations demonstrate that our method significantly outperforms other fine-grained cross-domain methods on the SPID dataset and obtains results comparable to state-of-the-art shoe-print retrieval methods on the FID300 dataset.
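The three-step pipeline described above can be illustrated with a minimal PyTorch sketch. The backbone choice (ResNet-18), the interpretation of the vertical split as toe/heel halves, the pixel-density weighting, and all layer sizes are illustrative assumptions rather than the authors' exact MP-CNN configuration; only the overall structure (parameter-shared part encoder, weighted sub-feature concatenation, triplet loss) follows the abstract.

```python
# Sketch of a multi-part weighted CNN with triplet-loss training.
# Backbone, split direction, weighting rule, and dimensions are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class MultiPartWeightedCNN(nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Parameter-shared feature extractor applied to both image parts.
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        self.proj = nn.Linear(512, embed_dim)

    def _encode_part(self, part):
        feat = self.encoder(part).flatten(1)  # (B, 512)
        # Assumed importance weight: fraction of non-background ("informative")
        # pixels in this part, used to scale its sub-feature.
        weight = (part > 0.1).float().mean(dim=(1, 2, 3))  # (B,)
        return self.proj(feat) * weight.unsqueeze(1)       # (B, embed_dim)

    def forward(self, x):
        # 1) Split the shoe-print image into two parts along the vertical axis
        #    (assumed here to be toe/heel halves).
        top, bottom = torch.chunk(x, 2, dim=2)
        # 2) Encode each part with the shared network and weight the sub-features.
        f_top, f_bottom = self._encode_part(top), self._encode_part(bottom)
        # 3) Concatenate the weighted sub-features into the final descriptor.
        return nn.functional.normalize(torch.cat([f_top, f_bottom], dim=1), dim=1)


if __name__ == "__main__":
    model = MultiPartWeightedCNN()
    # Triplet loss: (crime-scene query, matching gallery print, non-matching print).
    criterion = nn.TripletMarginLoss(margin=0.3)
    anchor = torch.rand(4, 3, 256, 128)
    positive = torch.rand(4, 3, 256, 128)
    negative = torch.rand(4, 3, 256, 128)
    loss = criterion(model(anchor), model(positive), model(negative))
    loss.backward()
    print(loss.item())
```

At retrieval time, the same network would embed both query and gallery images, and gallery prints would be ranked by distance to the query embedding; the U-Net-based enhancement mentioned in the abstract would be applied to crime-scene images before embedding.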
Keywords
Cross-domain, image retrieval, shoe-print, scene of crime