Learning dual-margin model for visual tracking

Neural Networks (2021)

Abstract
Existing trackers usually exploit robust features or online updating mechanisms to deal with target variations, a key challenge in visual tracking. However, features that are robust to variations retain little spatial information, and existing online updating methods are prone to overfitting. In this paper, we propose a dual-margin model for robust and accurate visual tracking. The dual-margin model comprises an intra-object margin between different target appearances and an inter-object margin between the target and the background. The proposed method not only distinguishes the target from the background but also perceives target changes, which allows it to follow the changing target appearance and facilitates accurate target state estimation. In addition, to exploit rich off-line video data and learn general rules of target appearance variation, we train the dual-margin model on a large off-line video dataset. We perform tracking under a Siamese framework using the constructed appearance set as templates. The proposed method achieves accurate and robust tracking performance on five public datasets while running in real time. The favorable performance against state-of-the-art methods demonstrates the effectiveness of the proposed algorithm.
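
The abstract does not give the exact loss formulation, but a dual-margin objective of this kind can be expressed as two hinge terms: one bounding the distance between two appearances of the same target (intra-object margin) and one enforcing separation between the target and the background (inter-object margin). The sketch below is a minimal, hypothetical PyTorch illustration under that assumption; the function name dual_margin_loss and the margin values are illustrative and not taken from the paper.

import torch
import torch.nn.functional as F

def dual_margin_loss(anchor, positive, negative,
                     intra_margin=0.2, inter_margin=0.5):
    # anchor/positive: embeddings of the same target at different times
    # negative: embedding of a background (non-target) region
    # Distance between two appearances of the same target.
    d_intra = F.pairwise_distance(anchor, positive)
    # Distance between the target and a background distractor.
    d_inter = F.pairwise_distance(anchor, negative)
    # Keep same-target appearances within the intra-object margin ...
    intra_term = F.relu(d_intra - intra_margin)
    # ... while pushing the background beyond the inter-object margin.
    inter_term = F.relu(inter_margin - d_inter)
    return (intra_term + inter_term).mean()

# Toy usage with random, L2-normalized embeddings.
if __name__ == "__main__":
    a, p, n = (F.normalize(torch.randn(8, 128), dim=1) for _ in range(3))
    print(dual_margin_loss(a, p, n).item())

In this reading, the intra-object term keeps the stored appearance set compact enough to track gradual appearance changes, while the inter-object term preserves discrimination against the background; the actual model in the paper may combine these terms differently.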
Keywords
Visual tracking, Siamese network, Dual margin