Low-Thrust Orbital Transfer using Dynamics-Agnostic Reinforcement Learning

arXiv (2022)

Abstract
Low-thrust trajectory design and in-flight control remain two of the most challenging topics for new-generation satellite operations. Most of the solutions currently implemented are based on reference trajectories and lead to sub-optimal fuel usage. Other solutions rely on simple guidance laws that need to be updated periodically, increasing the cost of operations. While some optimization strategies leverage Artificial Intelligence methods, all of the approaches studied so far require either previously generated data or strong a priori knowledge of the satellite dynamics. This study uses model-free Reinforcement Learning to train an agent on a constrained pericenter-raising scenario for a low-thrust medium-Earth-orbit satellite. The agent has no prior knowledge of the environment dynamics, which makes it unbiased by classical trajectory optimization patterns. The trained agent is then used to design a trajectory and to autonomously control the satellite during the cruise. Simulations show that a dynamics-agnostic agent is able to learn a quasi-optimal guidance law and responds well to uncertainties in the environment dynamics. The results obtained open the door to the use of Reinforcement Learning on more complex scenarios, multi-satellite problems, or the exploration of trajectories in environments where a reference solution is not known.
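
To make the setup concrete, the following is a minimal sketch, not the authors' implementation: a Gymnasium-style environment for a simplified pericenter-raising task (two-body dynamics plus a low-thrust acceleration) trained with an off-the-shelf model-free agent. PPO from stable-baselines3 stands in here for whichever model-free algorithm the paper actually uses, and all names (PericenterRaisingEnv), constants, the reward shaping, and the initial orbit are illustrative assumptions.

# Minimal sketch, assuming a Gymnasium-style interface; not the paper's code.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

MU = 3.986004418e14   # Earth's gravitational parameter [m^3/s^2]
T_MAX = 0.2           # assumed maximum thrust [N]
MASS = 500.0          # assumed satellite mass [kg]
DT = 300.0            # integration step [s]

class PericenterRaisingEnv(gym.Env):
    """Two-body dynamics with continuous low-thrust control. The agent only
    observes states and rewards, never the dynamics model (dynamics-agnostic)."""

    def __init__(self, target_rp=12.0e6, max_steps=2000):
        self.target_rp = target_rp        # assumed target pericenter radius [m]
        self.max_steps = max_steps
        # Observation: scaled Cartesian position and velocity
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)
        # Action: thrust components in [-1, 1]^3
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)

    def _pericenter(self, r, v):
        # r_p = a (1 - e), from the eccentricity vector and the vis-viva energy
        h = np.cross(r, v)
        e_vec = np.cross(v, h) / MU - r / np.linalg.norm(r)
        energy = 0.5 * np.dot(v, v) - MU / np.linalg.norm(r)
        a = -MU / (2.0 * energy)
        return a * (1.0 - np.linalg.norm(e_vec))

    def _obs(self):
        return np.concatenate([self.r / 1e7, self.v / 1e4]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # Assumed slightly eccentric MEO-like initial orbit, starting at pericenter
        self.r = np.array([10.0e6, 0.0, 0.0])
        self.v = np.array([0.0, 1.05 * np.sqrt(MU / 10.0e6), 0.0])
        self.steps = 0
        self.rp = self._pericenter(self.r, self.v)
        return self._obs(), {}

    def step(self, action):
        thrust_acc = T_MAX / MASS * np.clip(action, -1.0, 1.0)
        # Semi-implicit Euler propagation of two-body + thrust dynamics
        acc = -MU * self.r / np.linalg.norm(self.r) ** 3 + thrust_acc
        self.v = self.v + acc * DT
        self.r = self.r + self.v * DT
        self.steps += 1

        new_rp = self._pericenter(self.r, self.v)
        # Reward: pericenter-raising progress [km] minus a fuel-usage penalty
        reward = (new_rp - self.rp) / 1e3 - 0.01 * float(np.linalg.norm(action))
        self.rp = new_rp

        terminated = bool(self.rp >= self.target_rp)
        truncated = self.steps >= self.max_steps
        return self._obs(), float(reward), terminated, truncated, {}

A model-free learner can then be trained directly on this environment, for example:

from stable_baselines3 import PPO

env = PericenterRaisingEnv()
model = PPO("MlpPolicy", env, verbose=0)   # never queries the dynamics model
model.learn(total_timesteps=200_000)

The property mirrored from the abstract is that the learner only receives observations and rewards, so the same training loop applies unchanged when the true dynamics contain unmodeled perturbations.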
Keywords
reinforcement