Towards Robust Decision-Making for Autonomous Driving on Highway

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY (2023)

Citations: 10
Abstract
Reinforcement learning (RL) methods are commonly regarded as effective solutions for designing intelligent driving policies. Nonetheless, even after an RL policy has converged in training, it is notoriously difficult to guarantee its safety. In particular, an RL policy is prone to unsafe behavior in long-tail or unseen traffic scenarios, i.e., out-of-distribution test data. The design of an RL-based decision-making method must therefore account for this distribution shift. This paper proposes a robust decision-making framework for autonomous driving on the highway to improve driving safety. First, a Deep Deterministic Policy Gradient (DDPG)-based RL policy that directly maps observations to actions is constructed. Subsequently, the model uncertainty of the DDPG policy is evaluated at runtime to quantify the policy's reliability and identify unseen scenarios. In addition, a complementary principle-based policy is developed using the intelligent driver model (IDM) and the model for minimizing overall braking induced by lane changes (MOBIL). It takes over from the DDPG policy in unseen scenarios to guarantee a lower-bound performance of the decision-making system. Finally, the proposed method is implemented on an embedded system, the NVIDIA Jetson AGX Xavier, and challenging out-of-training-distribution cases are considered in the experiments: observations with sensor noise, a significant increase in traffic density, objects falling from the front vehicle, and road construction causing temporary changes in the road structure. Results indicate that the proposed framework outperforms state-of-the-art benchmarks. The code is also provided.
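The takeover mechanism described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses the variance across an ensemble of policy heads as a stand-in for the paper's runtime uncertainty estimate, a standard IDM acceleration law as the principle-based fallback, and a hypothetical threshold; all parameter values are illustrative defaults.

```python
import numpy as np

def idm_acceleration(v, v_lead, gap, v0=30.0, T=1.5, a_max=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model (IDM) longitudinal acceleration.
    v: ego speed (m/s), v_lead: lead-vehicle speed (m/s), gap: bumper-to-bumper gap (m).
    Parameter values are illustrative, not those used in the paper."""
    dv = v - v_lead
    # Desired dynamic gap s*: jam distance + time headway + braking interaction term.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

def ensemble_uncertainty(policies, obs):
    """Proxy for epistemic uncertainty: spread of actions across an ensemble
    of policy heads (one common way to quantify an RL policy's reliability)."""
    actions = np.array([p(obs) for p in policies])
    return actions.mean(axis=0), float(actions.var(axis=0).max())

def robust_action(policies, obs, fallback, threshold=0.1):
    """Use the learned (DDPG-like) policy when uncertainty is low; otherwise
    hand control to the principle-based fallback (hypothetical threshold)."""
    action, unc = ensemble_uncertainty(policies, obs)
    if unc < threshold:
        return action, "rl"
    return fallback(obs), "rule"
```

For example, when all ensemble heads agree the learned policy is trusted, whereas a disagreeing ensemble (an unseen scenario, by this proxy) triggers the rule-based fallback.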
Keywords
Autonomous vehicles, decision-making, reinforcement learning policy, rule-based policy