Real-time heads-up display detection in video

Advanced Video and Signal Based Surveillance (2014)

Abstract
Video from surveillance cameras, aerial sensors, video games, and other sources may occasionally contain text, heads-up displays (HUDs), lens debris, or other artifacts superimposed on the underlying scene. In standard video processing pipelines, the early detection and filtering of these image-plane-aligned obstructions can improve the accuracy of later operations such as video stabilization, tracking, or object recognition. This paper presents a technique to accomplish this automatically: it first extracts pixel-level features that jointly capture local spatiotemporal variations around each pixel. Features extracted from multiple frames are then used by a novel classification system to determine whether any obstructions are present and, if possible, to categorize them into known types. Experimental results show promising performance on a variety of HUD categories, in addition to other types of on-screen display.
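The abstract only outlines the approach at a high level, so the sketch below is not the authors' method; it merely illustrates the kind of pixel-level spatiotemporal cue described: overlay pixels (HUD text, graphics) tend to stay nearly constant over time while sitting on strong spatial edges, whereas scene pixels in moving video vary temporally. The function name `hud_candidate_mask` and the two thresholds are hypothetical choices for this illustration.

```python
import numpy as np

def hud_candidate_mask(frames, temporal_thresh=2.0, edge_thresh=20.0):
    """Flag pixels that stay nearly constant across a short window of frames
    yet lie on strong spatial edges -- a rough heuristic for image-plane-aligned
    overlays such as HUD text, assuming the underlying scene is moving.

    frames: sequence of grayscale frames (2-D arrays of identical shape).
    Returns a boolean mask the same shape as one frame.
    """
    stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames], axis=0)

    # Temporal variation: overlay pixels barely change from frame to frame.
    temporal_std = stack.std(axis=0)

    # Spatial variation on the time-averaged frame: text and graphics
    # produce sharp local gradients.
    mean_frame = stack.mean(axis=0)
    gy, gx = np.gradient(mean_frame)
    edge_strength = np.hypot(gx, gy)

    # Candidate overlay pixels: temporally stable AND spatially edge-like.
    return (temporal_std < temporal_thresh) & (edge_strength > edge_thresh)
```

In a fuller pipeline along the lines the paper describes, per-pixel cues like this would be aggregated over many frames and fed to a classifier that decides whether an obstruction is present and which known type it belongs to.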
Keywords
feature extraction,filtering theory,head-up displays,image classification,object detection,video surveillance,HUDs,aerial sensors,classification system,image-plane aligned obstruction detection,image-plane aligned obstruction filtering,lens debris,local spatiotemporal variations,multiple frames,object recognition,object tracking,on-screen display,pixel-level feature extraction,real-time head-up display detection,video games,video processing pipelines,video stabilization,video surveillance cameras