How to Evaluate Proving Grounds for Self-Driving? A Quantitative Approach

IEEE Transactions on Intelligent Transportation Systems (2021)

Cited by 15
Abstract
Proving grounds have been a critical component of testing and validation for Connected and Automated Vehicles (CAVs). Although quite a few world-class testing facilities have been constructed over the years, the evaluation of proving grounds themselves as testing approaches has rarely been studied. In this paper, we present the first attempt to systematically evaluate CAV proving grounds and contribute a generative, sample-based approach to assessing how well proving grounds represent real-world traffic scenarios. Leveraging typical use cases extracted from naturalistic driving events, we establish a strong link between the proving ground testing results of CAVs and their anticipated public street performance. We present benchmark results of our approach on three world-class CAV testing facilities: Mcity, Almono (Uber ATG), and Kcity. We demonstrate an overall evaluation of these proving grounds in terms of their capability to accommodate real-world traffic scenarios. We believe that once the effectiveness of a testing ground itself is validated, its testing results will grant more confidence for CAV public deployment.
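The abstract does not detail the scoring procedure, but the combination of "generative sample-based approach", use cases extracted from naturalistic driving events, and the "unsupervised learning" keyword suggests a pipeline along these lines: cluster naturalistic driving events into typical use cases, then score a proving ground by the (frequency-weighted) fraction of those use cases its layout can accommodate. The sketch below is a minimal illustration of that idea, not the authors' published method; the feature encoding, the `accommodates` predicate, and all other names are assumptions for illustration.

```python
# Illustrative sketch only: sample-based scenario-coverage scoring for a
# proving ground. The real paper's pipeline is not published here; the
# feature space and the `accommodates` check are hypothetical stand-ins.
import numpy as np
from sklearn.cluster import KMeans


def coverage_score(event_features: np.ndarray, accommodates, n_clusters: int = 20) -> float:
    """Cluster naturalistic driving events into typical use cases, then
    return the frequency-weighted fraction of cluster centroids that the
    proving ground can reproduce."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(event_features)
    # Weight each use case by how often it occurs in naturalistic driving.
    weights = np.bincount(km.labels_, minlength=n_clusters) / len(km.labels_)
    # 1.0 if the facility can stage this typical scenario, else 0.0.
    covered = np.array([accommodates(c) for c in km.cluster_centers_], dtype=float)
    return float(np.dot(weights, covered))


# Toy usage: features might encode speed, curvature, actor count, etc.;
# a real `accommodates` would check a centroid against the facility's
# road network (intersection types, curve radii, lane counts, ...).
rng = np.random.default_rng(0)
events = rng.normal(size=(1000, 4))
score = coverage_score(events, accommodates=lambda c: bool(np.all(np.abs(c) < 1.5)))
print(f"scenario coverage: {score:.2f}")
```

Under this reading, comparing facilities such as Mcity, Almono, and Kcity reduces to comparing their coverage scores against the same naturalistic event sample, which matches the benchmark framing in the abstract.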
Keywords
Self-driving, testing, proving ground, design, unsupervised learning