Human-Like Highway Trajectory Modeling Based On Inverse Reinforcement Learning

2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC)(2019)

Abstract
Autonomous driving is one of the current cutting-edge technologies. When sharing the highway with human drivers, autonomous cars should not only drive autonomously and safely, but also follow human drivers' behavior patterns in their actions and trajectories. Traditional methods, though robust and interpretable, demand much human labor to engineer the complex mapping from the current driving situation to the vehicle's future control. Newly developed deep-learning methods can learn such complex mappings automatically from data and require less human engineering, but they mostly act as black boxes and are less interpretable. We propose a new combined method based on inverse reinforcement learning to harness the advantages of both. Experimental validation on lane-change prediction and human-like trajectory planning shows that the proposed method approaches state-of-the-art performance in modeling human trajectories, and is both interpretable and data-driven.
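The abstract does not detail the learning procedure, but the core idea of maximum-entropy inverse reinforcement learning can be sketched as follows. This is a generic illustration, not the paper's implementation: the feature vectors, trajectory set, and learning-rate values are all hypothetical, and the reward is assumed linear in the features with trajectory probability proportional to exp(w·f).

```python
import math

# Hypothetical toy setup: each candidate highway trajectory is summarized
# by a feature vector (e.g. [speed deviation, lateral jerk]).
# MaxEnt IRL assumption: the demonstrated (expert) trajectory is drawn with
# probability proportional to exp(w . f), so we fit the reward weights w by
# gradient ascent on the log-likelihood of the expert demonstration.

candidates = [
    [0.1, 0.2],   # assumed expert-like trajectory (index 0)
    [0.9, 0.8],
    [0.5, 0.6],
]
expert_idx = 0

def softmax_probs(w, feats):
    """Boltzmann distribution over trajectories given reward weights w."""
    scores = [math.exp(sum(wi * fi for wi, fi in zip(w, f))) for f in feats]
    z = sum(scores)
    return [s / z for s in scores]

def learn_weights(feats, expert_idx, lr=0.5, steps=200):
    w = [0.0] * len(feats[0])
    for _ in range(steps):
        p = softmax_probs(w, feats)
        # Gradient of the log-likelihood: expert feature vector minus the
        # expected feature vector under the current trajectory distribution.
        expected = [sum(p[i] * feats[i][k] for i in range(len(feats)))
                    for k in range(len(w))]
        for k in range(len(w)):
            w[k] += lr * (feats[expert_idx][k] - expected[k])
    return w

w = learn_weights(candidates, expert_idx)
probs = softmax_probs(w, candidates)
# After training, the expert trajectory receives the highest probability.
```

The learned weights are interpretable (each weight scores one human-readable feature), while the fitting itself is data-driven, which matches the trade-off the abstract describes.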
Keywords
human-like highway trajectory planning,safety,autonomous cars,autonomous driving,highway trajectory modeling,lane-change prediction,inverse reinforcement learning,complex mapping,deep-learning methods