A Novel Method to Perceive Self-Vehicle State Based on Vehicle Video by Image Similarity Calculation

IEEE Open Journal of Instrumentation and Measurement (2022)

Abstract
Perceiving the self-vehicle state from vehicle information can provide key inputs for unmanned driving and improve vehicle safety monitoring. However, existing studies mainly perceive the vehicle state using out-of-vehicle sensors, positioning systems, and in-vehicle sensors, each of which has its own limitations. In recent years, video image processing has been introduced into transportation research. Despite this, and despite the popularity of vehicle videos, self-vehicle state perception based on vehicle videos captured by the drive recorder remains an unexplored area. Therefore, this paper proposes a novel method to perceive the self-vehicle state, comprising “move” and “stop”, by calculating the image similarity of the static region between two adjacent video frames. The static-region extraction is based on the You Only Look At CoefficienTs (YOLACT) instance segmentation model, which avoids interference from surrounding objects such as cars and pedestrians. We acquired actual tram vehicle videos to validate our method, which accurately perceives the state and state transitions continuously and in real time across different complex scenes at any time, even when the vehicle stops and restarts within only 3 seconds. The approach offers new ideas and inspiration for video-based studies and illustrates that vehicle videos can yield not only information about the vehicle’s environment but also the self-vehicle state. The proposed approach can serve as an alternative for estimating the self-vehicle state when traditional methods are unavailable.
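The core idea — comparing the static region of two adjacent frames and thresholding the similarity to decide between “stop” and “move” — can be sketched as follows. This is a minimal illustration, not the paper’s implementation: it assumes a precomputed boolean static-region mask (in the paper this comes from YOLACT instance segmentation), uses normalized cross-correlation as a generic similarity measure, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def static_region_similarity(frame_a, frame_b, static_mask):
    """Similarity of the static region between two adjacent grayscale frames.

    frame_a, frame_b: 2-D arrays (one video frame each).
    static_mask: boolean 2-D array, True where pixels belong to the static
        region (i.e. after masking out dynamic objects such as cars and
        pedestrians with an instance-segmentation model like YOLACT).
    Returns a normalized cross-correlation in [-1, 1]: values near 1 mean
    the static region is essentially unchanged between the two frames.
    """
    a = frame_a[static_mask].astype(np.float64)
    b = frame_b[static_mask].astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:  # perfectly flat regions: treat as identical
        return 1.0
    return float((a * b).sum() / denom)

def perceive_state(similarity, threshold=0.9):
    """Classify the self-vehicle state from one similarity value.

    The threshold is an illustrative assumption, not a value from the paper.
    """
    return "stop" if similarity >= threshold else "move"
```

In use, each pair of adjacent frames yields one similarity value; when the camera is stationary the static region barely changes (high similarity, “stop”), while ego-motion shifts the whole scene (low similarity, “move”).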
Keywords
Image similarity, instance segmentation, self-vehicle, state perception, vehicle video