Learning Deep Neural Network Controllers for Dynamical Systems with Safety Guarantees: Invited Paper

2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2019

Abstract
There is recent interest in using deep neural networks (DNNs) to control autonomous cyber-physical systems (CPSs). One challenge with this approach is that many autonomous CPS applications are safety-critical, and it is not clear whether DNNs can guarantee safe system behaviors. To address this problem, we present an approach that modifies existing (deep) reinforcement learning algorithms to guide the training of these controllers so that the overall system is safe. We present a novel verification-in-the-loop training algorithm that uses the formalism of barrier certificates to synthesize DNN controllers that are safe by design. We demonstrate a proof-of-concept evaluation of our technique on multiple CPS examples.
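To make the verification-in-the-loop idea concrete, below is a minimal, illustrative sketch (not the paper's algorithm): a toy one-dimensional discrete-time system, a fixed quadratic barrier candidate B(x) = x² − 1 whose zero-sublevel set is the safe set, and a training loop that alternates controller updates with a sampling-based barrier check, repairing the controller whenever a counterexample is found. The system, the gain-increment "training" step, and all function names are assumptions chosen for simplicity.

```python
# Illustrative sketch of verification-in-the-loop training with a
# barrier certificate. The toy system, barrier, and update rule are
# hypothetical stand-ins, not the paper's DNN setup.

def barrier(x):
    # B(x) <= 0 defines the safe set |x| <= 1.
    return x * x - 1.0

def step(x, u):
    # Unstable open-loop dynamics: x' = 2x + u.
    return 2.0 * x + u

def verify(k, samples):
    """Discrete-time barrier check for the controller u = -k*x:
    every sampled safe state must remain safe after one step.
    Returns the list of counterexample states (empty = certified
    on the sample grid)."""
    violations = []
    for x in samples:
        if barrier(x) <= 0.0:            # state inside the safe set
            if barrier(step(x, -k * x)) > 0.0:  # next state leaves it
                violations.append(x)
    return violations

def train(k=0.0, lr=0.5, iters=50):
    """Toy training loop: keep strengthening the controller gain
    until the verifier finds no counterexamples, mimicking
    verification-in-the-loop synthesis."""
    samples = [i / 50.0 for i in range(-50, 51)]  # grid over [-1, 1]
    for _ in range(iters):
        if not verify(k, samples):
            return k        # barrier condition holds on the grid
        k += lr             # counterexample found: update controller
    return k
```

In the real algorithm the controller is a DNN trained by (deep) reinforcement learning, the barrier certificate is synthesized rather than fixed, and the verification step is a formal check rather than grid sampling; the sketch only shows the alternation between learning updates and counterexample-driven repair.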
Keywords
dynamical systems,safety guarantees,autonomous CPS applications,safety-critical,safe system behaviors,DNN-controllers,learning deep neural network controllers,autonomous cyber-physical systems control,verification-in-the-loop training algorithm,reinforcement learning algorithms