Learning a Generalizable Trajectory Sampling Distribution for Model Predictive Control

IEEE Trans. Robotics (2024)

Abstract
We propose a sample-based Model Predictive Control (MPC) method for collision-free navigation that uses a normalizing flow as a sampling distribution, conditioned on the start, goal, environment and cost parameters. This representation allows us to learn a distribution that accounts for both the dynamics of the robot and complex obstacle geometries. We propose a way to incorporate this sampling distribution into two sampling-based MPC methods, MPPI and iCEM. However, when deploying these methods, the robot may encounter an out-of-distribution (OOD) environment. To generalize our method to OOD environments we also present an approach that performs projection on the representation of the environment. This projection changes the environment representation to be more in-distribution while also optimizing trajectory quality in the true environment. Our simulation results on a 2D double-integrator, a 12DoF quadrotor and a 7DoF kinematic manipulator suggest that using a learned sampling distribution with projection outperforms MPC baselines on both in-distribution and OOD environments over different cost functions, including OOD environments generated from real-world data.
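The sample-based MPC loop the abstract builds on (MPPI-style: sample candidate control sequences from a proposal distribution, roll them out, and exponentially weight them by cost) can be sketched as below. This is a minimal illustration, not the paper's implementation: a Gaussian proposal stands in for the learned normalizing flow, the dynamics are the 2D double integrator mentioned in the abstract, and all function and parameter names (`mppi_step`, `sample_controls`, `lam`, the goal-reaching cost) are assumptions for the sketch.

```python
import numpy as np

def mppi_step(x0, u_nom, sample_controls, dynamics, cost, lam=1.0, K=256):
    """One MPPI update: sample K control sequences from a proposal
    distribution, roll each out through the dynamics, and return the
    exponentially cost-weighted average sequence. The paper's method
    would replace the Gaussian proposal with a learned flow conditioned
    on start, goal, and environment."""
    H, m = u_nom.shape
    U = sample_controls(u_nom, K)              # (K, H, m) candidate sequences
    costs = np.empty(K)
    for k in range(K):
        x, c = x0, 0.0
        for t in range(H):
            x = dynamics(x, U[k, t])
            c += cost(x, U[k, t])
        costs[k] = c
    w = np.exp(-(costs - costs.min()) / lam)   # softmin weights (temperature lam)
    w /= w.sum()
    return np.tensordot(w, U, axes=1)          # weighted average control sequence

dt = 0.1

def dynamics(x, u):
    # 2D double integrator: state [px, py, vx, vy], control [ax, ay]
    px, py, vx, vy = x
    return np.array([px + vx * dt, py + vy * dt, vx + u[0] * dt, vy + u[1] * dt])

goal = np.array([1.0, 1.0])

def cost(x, u):
    # quadratic goal-reaching cost with a small control penalty (illustrative)
    return np.sum((x[:2] - goal) ** 2) + 1e-3 * np.sum(u ** 2)

rng = np.random.default_rng(0)

def sample_controls(u_nom, K):
    # Gaussian proposal around the nominal sequence; the learned flow
    # would be sampled here instead.
    return u_nom + 0.5 * rng.standard_normal((K,) + u_nom.shape)

u = mppi_step(np.zeros(4), np.zeros((20, 2)), sample_controls, dynamics, cost)
```

The key design point the abstract makes is that `sample_controls` is the only piece that changes: swapping the Gaussian for a flow conditioned on the environment lets the sampler respect obstacle geometry and dynamics before any rollouts are evaluated.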
Keywords
Motion and Path Planning, Nonholonomic Motion Planning, Deep Generative Models, Deep Learning in Robotics and Automation