A Computer Vision-Based Attention Generator using DQN.

2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021)

Abstract
A significant obstacle to achieving autonomous driving (AD) and advanced driver-assistance system (ADAS) functionality in passenger vehicles is high-fidelity perception at a sufficiently low cost in computation and sensors. One line of research that aims to address this challenge takes inspiration from human foveal vision by using attention-based sensing. This work presents an end-to-end, computer vision-based Deep Q-Network (DQN) technique that intelligently selects a priority region of an image to receive greater attention, thereby improving perception performance. The method is evaluated on the Berkeley Deep Drive (BDD) dataset. Results demonstrate that a substantial improvement in perception performance can be attained - compared to a baseline method - at a minimal cost in time and processing.
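To make the idea concrete, the sketch below shows one plausible way a DQN could score candidate image regions and pick the one to attend to. The grid size, network architecture, and epsilon-greedy policy are illustrative assumptions; the abstract does not specify the paper's actual design, reward signal, or training setup.

```python
# Minimal sketch (PyTorch) of a DQN-style region selector. Everything here
# (3x3 region grid, CNN shape, 96x96 input) is an assumption for illustration,
# not the paper's reported implementation.
import random
import torch
import torch.nn as nn

GRID = 3  # assume the frame is split into a 3x3 grid of candidate regions


class RegionDQN(nn.Module):
    """Maps a downscaled frame to one Q-value per candidate region."""

    def __init__(self, n_actions=GRID * GRID):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, x):
        return self.head(self.conv(x))


def select_region(model, frame, epsilon=0.1):
    """Epsilon-greedy choice of the grid cell to attend to."""
    if random.random() < epsilon:
        return random.randrange(GRID * GRID)
    with torch.no_grad():
        return int(model(frame.unsqueeze(0)).argmax(dim=1))


# Usage: a downscaled frame; the chosen index maps to a grid cell that a
# downstream detector would then process at higher resolution.
model = RegionDQN()
frame = torch.rand(3, 96, 96)
idx = select_region(model, frame)
row, col = divmod(idx, GRID)
print(f"attend to grid cell ({row}, {col})")
```

In a full pipeline, the reward for training such a network would presumably be derived from downstream detection quality in the attended region, but the abstract does not state how the reward is defined.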
Keywords
DQN,significant obstacle,advanced driver-assistance systems,passenger vehicles,high-fidelity perception,sufficiently low cost,sensors,human foveal vision,attention-based sensing,end-to-end computer vision-based Deep Q-Network technique,greater attention,perception performance,Berkeley Deep Drive dataset