Interpretable AI Agent Through Nonlinear Decision Trees for Lane Change Problem.

IEEE Symposium Series on Computational Intelligence (SSCI), 2021

Abstract
Recent years have witnessed a surge in the application of deep neural networks (DNNs) and reinforcement learning (RL) methods to autonomous control systems and game-playing problems. While these systems can learn from real-world data and produce adequate actions for a variety of state conditions, their internal complexity offers no easy way to explain their actions. In this paper, we generate state-action pair data from a trained DNN/RL system and employ a previously proposed nonlinear decision tree (NLDT) framework to decipher simple hidden rule sets that explain the working of the DNN/RL system. The complexity of the rule sets is controllable by the user. In essence, the bi-level optimization procedure that finds the NLDTs reduces the state-action logic to a minimal, interpretable level. After demonstrating the working principle of the NLDT method on a revised mountain car control problem, the paper applies the methodology to a lane-changing problem involving six critical cars in front of and behind a pilot car in the left, middle, and right lanes. The derived NLDTs involve simple relationships among 12 decision variables: the relative distances and velocities of the six critical cars. The analytical decision rules are then further simplified with a symbolic analysis tool to provide an English-like interpretation of the lane-change policy. This study scratches the surface of the interpretability of modern machine-learning-based tools; the issue deserves further attention and applications to make the overall approach more integrated and effective.
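To make the distillation pipeline concrete, the following is a minimal Python sketch of the general idea described in the abstract: query a trained black-box controller to collect state-action pairs, then fit a depth-limited tree surrogate to them. The `trained_policy` function, the 12-dimensional state encoding, and all numeric parameters are illustrative assumptions, and scikit-learn's axis-aligned decision tree is used only as a stand-in for the paper's NLDT, whose nonlinear split rules are instead found by bi-level optimization.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical black-box policy: maps a 12-D lane-change state
# (relative distances and velocities of six surrounding cars)
# to an action (0 = keep lane, 1 = change left, 2 = change right).
# In the paper this role is played by a trained DNN/RL controller.
def trained_policy(state: np.ndarray) -> int:
    # Placeholder logic standing in for the DNN/RL agent.
    return int(state[0] < 10.0) + int(state[6] < -2.0)

rng = np.random.default_rng(0)

# Step 1: generate state-action pair data by querying the black box.
states = rng.uniform(-50.0, 50.0, size=(5000, 12))
actions = np.array([trained_policy(s) for s in states])

# Step 2: fit a depth-limited tree as an interpretable surrogate.
# (The NLDT framework would instead derive nonlinear split rules
# via bi-level optimization; a CART tree merely illustrates the
# distillation step and the user-controllable complexity budget.)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(states, actions)

# Step 3: inspect the extracted rule set and its fidelity.
print("fidelity to black box:", surrogate.score(states, actions))
print(export_text(surrogate,
                  feature_names=[f"x{i}" for i in range(12)]))
```

Limiting the surrogate's depth mirrors the abstract's point that rule-set complexity is a user-controlled knob: a shallower tree trades fidelity to the black box for shorter, more readable rules.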
Keywords
Decision trees, bi-level optimization, machine learning, reinforcement learning, autonomous vehicles