Learning object-uncertainty policy for visual tracking

Information Sciences (2022)

Abstract
In our research, we found that most trackers aim to produce an accurate and robust score map while neglecting to further examine the confidence of the tracking results. Inspired by Siamese trackers, which use only the template from the first frame to locate the target, we propose a novel object-uncertainty policy. First, we design a dynamic target template set that considers both the initial target template and reliable templates from subsequent frames. Second, we adopt multi-layer fusion to represent the target and analyze the fusion of different feature layers. Moreover, we compute similarity with a more effective cosine similarity function instead of the correlation operation. Finally, we propose a novel voting mechanism based on the similarity between the target tracked in subsequent frames and the target template set. Importantly, this method can be embedded into DCF-like trackers to improve tracking performance; we embed it into the recent DiMP and PrDiMP trackers separately for comparison. Extensive experiments demonstrate that our method effectively enhances the discriminative ability of the model and prevents it from learning background information. The code and raw tracking results are available at https://github.com/hexdjx/OUPT.
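The cosine-similarity voting over a template set described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, thresholds, and the rule for growing the template set (keep the first-frame template, drop the oldest subsequent one) are assumptions for the sketch.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened feature vectors.
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def vote_confidence(candidate, template_set, threshold=0.5):
    # Voting mechanism (sketch): fraction of templates whose cosine
    # similarity to the candidate feature exceeds the threshold.
    votes = [cosine_similarity(candidate, t) > threshold for t in template_set]
    return sum(votes) / len(template_set)

def maybe_update_templates(candidate, template_set, max_size=5, add_threshold=0.7):
    # Dynamic template set (sketch): append the candidate as a reliable
    # template when the vote is high enough; always keep the initial
    # (first-frame) template at index 0 and drop the oldest of the rest.
    if vote_confidence(candidate, template_set) >= add_threshold:
        template_set.append(candidate)
        if len(template_set) > max_size:
            del template_set[1]
    return template_set
```

A high vote indicates the tracked target is consistent with the template set, so it can be trusted (and optionally added as a new template); a low vote flags an uncertain result.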
Key words
Visual tracking, Deep learning, Discriminative correlation filter, Template matching, Object uncertainty policy