Optimal Control via Linearizable Deep Learning

2023 American Control Conference (ACC 2023)

Abstract
Deep learning models are frequently used to capture relations between inputs and outputs and to predict operation costs in dynamical systems. Computing optimal control policies based on the resulting regression models, however, is a challenging task because of the nonlinearity and nonconvexity of deep learning architectures. To address this issue, we propose in this paper a linearizable approach to design optimal control policies based on deep learning models for handling both continuous and discrete action spaces. When using piecewise linear activation functions, one can construct an equivalent representation of recurrent neural networks in terms of a set of mixed-integer linear constraints. That in turn means that the optimal control problem reduces to a mixed-integer linear program (MILP), which can then be solved using off-the-shelf MILP optimization solvers. Numerical experiments on standard reinforcement learning benchmarks attest to the good performance of the proposed approach.
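The key construction described in the abstract is that a network with piecewise linear activations (such as ReLU) admits an exact representation as mixed-integer linear constraints. Below is a minimal, hedged sketch of the standard big-M encoding of a single ReLU unit y = max(0, x), the building block of such MILP reformulations; the bound `M`, the function names, and the feasibility-check style are illustrative assumptions, not the paper's actual formulation.

```python
def relu_bigM_constraints(x, y, z, M=10.0):
    """Check whether (x, y, z) satisfies the big-M MILP encoding of y = max(0, x).

    Assumes |x| <= M (an a-priori bound on the pre-activation).
    z is a binary indicator variable: z = 1 when the unit is active (x > 0).
    In a real MILP, a solver chooses y and z subject to these linear constraints;
    here we merely verify feasibility of a given assignment.
    """
    assert z in (0, 1), "z must be binary"
    return (
        y >= 0                  # y is nonnegative
        and y >= x              # y dominates the identity branch
        and y <= x + M * (1 - z)  # when z = 1, forces y <= x (so y = x)
        and y <= M * z            # when z = 0, forces y <= 0 (so y = 0)
    )


def relu(x):
    return max(0.0, x)


# For every x in [-M, M], choosing the indicator consistently with the sign of x
# makes the true ReLU output (and only that output) feasible.
for x in (-3.0, -0.5, 0.0, 1.5, 4.0):
    z = 1 if x > 0 else 0
    assert relu_bigM_constraints(x, relu(x), z)
```

Stacking one such constraint group per neuron (with layer-wise affine couplings, which are already linear) turns the whole network, and hence the control problem built on it, into a MILP that off-the-shelf solvers can handle, as the abstract describes.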