Decision Making Under Uncertainty for Urban Driving

Semantic Scholar (2018)

Abstract
In this work we examine the problem of motion planning for Autonomous Driving (AD). We must plan in a stochastic environment with several sources of uncertainty: imperfect sensors, occlusions, and the unpredictable behaviour of external agents. We focus on sensor uncertainty and model the AD problem as a Partially Observable Markov Decision Process (POMDP) with a discrete action space and continuous state and observation spaces. We propose improvements to a traditional online algorithm for such large POMDPs, the POMCP algorithm, with the goal of making our motion planner a safer autonomous driver. Our proposed solutions include a "safe" discretization of the observation space, importance sampling techniques, and an offline preprocessing of the state space that restricts the planner to "safe" actions. We implemented and tested our methods on two AD environments: a Custom Anti-Collision Test Environment (CACTE) and the Urban Driving Environment (UDE) from Stanford's Intelligent Systems Laboratory (SISL).

I. PROBLEM STATEMENT

We consider the problem of determining a sequence of optimal actions for an urban driving motion planner. The autonomous driving pipeline consists of two modules with distinct roles: a sensor fusion and localization module, which produces a probabilistic model of the environment's dynamics, and a decision making module, in charge of defining a driving policy. The decision making module can be further subdivided into three modules:
1) Route planner: defines a long-term driving goal.
2) Behavioral planner: defines a list of short-term objectives, typically going from A to B with a mixture of efficiency (time taken), comfort (minimizing jerk), and safety (avoiding collisions and keeping safety distances) objectives.
3) Motion planner: completes the motion tasks submitted by the behavioral planner.

Fig. 1: Autonomous Driving Pipeline

We focus on the motion planner, which can be thought of as a module taking sequential decisions, in the form of a discrete set of actions (acceleration or deceleration), that are then converted into a sequence of continuous actions by a command and control module (usually a simple Model Predictive Controller, MPC). The motion planner has to deal with several sources of uncertainty that are not directly observable: sensor uncertainty, occlusions, and driver intentions. Linking driving decisions to a proper handling of these sources of uncertainty is of paramount importance. In this paper we study how POMDP models can be applied to an AD motion planner, with an emphasis on its safety objective: our proposed algorithms should succeed in going from position A to B with the strict requirement of avoiding any collision with other agents. When dealing with huge state spaces, as is the case in our urban driving setting, online methods are usually preferred over offline methods, and our work starts from there. "A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles" [9] reveals that most motion planning techniques boil down, in some form, to an online graph search. In this paper, we study how uncertainty can be properly modeled and handled during this online graph search process to improve safety, and we also consider offline methods to guide the graph search and further improve safety.
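The abstract's "safe" discretization of the observation space can be illustrated with a minimal sketch. The paper does not give its exact scheme; one plausible reading, assumed here, is that a continuous sensed gap to another vehicle is binned by rounding *toward* the ego vehicle, so the planner never believes an obstacle is farther away than measured. The function name and `bin_width` parameter are hypothetical.

```python
import math

def safe_discretize(gap_m, bin_width=2.0):
    """Conservatively discretize a continuous sensed gap (in meters).

    Rounding down (toward the ego vehicle) is the safe direction:
    discretization error can only make the planner more cautious,
    never less. bin_width is an illustrative tuning parameter.
    """
    return math.floor(gap_m / bin_width) * bin_width
```

For example, a measured gap of 7.9 m is reported to the planner as 6.0 m rather than 8.0 m, biasing the belief state toward the more dangerous interpretation.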
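The combination of online planning with an offline preprocessing step that restricts the search to "safe" actions can be sketched as follows. This is not the paper's POMCP implementation: it replaces the full search tree with plain Monte-Carlo rollouts over a generative model, keeping only the two ingredients named above, a discrete action set and an offline-computed safe-action mask. All names, thresholds, and the gap-based state are illustrative assumptions.

```python
import random

ACTIONS = ["accelerate", "maintain", "brake"]

def offline_safe_actions(state):
    """Hypothetical stand-in for the offline preprocessing of the
    state space: forbid accelerating when the gap to the vehicle
    ahead is below an (assumed) safety threshold."""
    if state["gap_m"] < 10.0:
        return ["maintain", "brake"]
    return list(ACTIONS)

def rollout_value(state, action, generative_model, depth=10, gamma=0.95):
    """Monte-Carlo rollout in the style of POMCP's default policy:
    step the generative model forward, sampling random safe actions."""
    total, discount, a = 0.0, 1.0, action
    for _ in range(depth):
        state, reward = generative_model(state, a)
        total += discount * reward
        discount *= gamma
        a = random.choice(offline_safe_actions(state))
    return total

def plan(state, generative_model, n_sims=200):
    """Pick the safe action with the highest average rollout return."""
    best_a, best_v = None, float("-inf")
    for a in offline_safe_actions(state):
        v = sum(rollout_value(state, a, generative_model)
                for _ in range(n_sims)) / n_sims
        if v > best_v:
            best_a, best_v = a, v
    return best_a
```

Because unsafe actions are masked before the online search rather than merely penalized in the reward, the planner cannot select them even when sampling noise makes them look attractive, which is the safety property the abstract emphasizes.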