Tracking as Online Decision-Making: Learning a Policy from Streaming Videos with Reinforcement Learning

2017 IEEE International Conference on Computer Vision (ICCV)

Cited by 134 | Views 140
Abstract
We formulate tracking as an online decision-making process, where a tracking agent must follow an object despite ambiguous image frames and a limited computational budget. Crucially, the agent must decide where to look in the upcoming frames, when to reinitialize because it believes the target has been lost, and when to update its appearance model for the tracked object. Such decisions are typically made heuristically. Instead, we propose to learn an optimal decision-making policy by formulating tracking as a partially observable Markov decision process (POMDP). We learn policies with deep reinforcement learning algorithms that need supervision (a reward signal) only when the track has gone awry. We demonstrate that sparse rewards allow us to quickly train on massive datasets, several orders of magnitude more than past work. Interestingly, by treating the data source of Internet videos as unlimited streams, we both learn and evaluate our trackers in a single, unified computational stream.
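The per-frame decision loop described above can be sketched as follows. This is a hypothetical, minimal illustration (not the paper's actual model or training algorithm): a toy agent chooses among the three decision types the abstract names (keep tracking, reinitialize, update the appearance model) via a simple epsilon-greedy rule, and learns from a sparse reward that is nonzero only when the track has gone awry. The class and function names, action encoding, and update rule are all illustrative assumptions.

```python
import random

# Illustrative action set matching the three decisions in the abstract.
ACTIONS = ("track", "reinitialize", "update_appearance")

class TrackingAgent:
    """Toy sketch of a POMDP-style tracking agent (hypothetical, not the paper's model)."""

    def __init__(self, epsilon=0.1, alpha=0.5, seed=0):
        self.q = {a: 0.0 for a in ACTIONS}  # action-value estimates
        self.epsilon = epsilon              # exploration rate
        self.alpha = alpha                  # learning rate
        self.rng = random.Random(seed)

    def act(self):
        # Epsilon-greedy choice over the three decision types.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Sparse supervision: the reward is nonzero only when the
        # track has gone awry, so most frames trigger no update.
        if reward != 0.0:
            self.q[action] += self.alpha * (reward - self.q[action])

def run_stream(agent, frames):
    """Process a stream of (ambiguous, track_lost) frame annotations."""
    for ambiguous, lost in frames:
        action = agent.act()
        # Penalize blindly continuing to track after the target is lost;
        # all other outcomes yield the sparse default reward of zero.
        reward = -1.0 if (lost and action == "track") else 0.0
        agent.update(action, reward)
    return agent.q
```

Treating the frame source as an unlimited stream, as the abstract suggests, amounts to calling `run_stream` on an endless iterator, so learning and evaluation share one computational pass.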
Keywords
online decision-making process, tracking agent, ambiguous image frames, appearance model, tracked object, optimal decision-making policy, partially observable Markov decision process, deep reinforcement learning algorithms, Internet videos, unified computational stream, video streaming