Augmenting Imitation Experience via Equivariant Representations

IEEE International Conference on Robotics and Automation (2022)

Abstract
The robustness of visual navigation policies trained through imitation often hinges on the augmentation of the training image-action pairs. Traditionally, this has been done by collecting data from multiple cameras, by using standard data augmentations from computer vision, such as adding random noise to each image, or by synthesizing training images. In this paper, we show that there is another practical alternative for data augmentation in visual navigation, based on extrapolating viewpoint embeddings and actions near those observed in the training data. Our method makes use of the geometry of the visual navigation problem in 2D and 3D and relies on policies that are functions of equivariant embeddings, rather than images. Given an image-action pair from a training navigation dataset, our neural network model predicts the latent representations of images at nearby viewpoints, using the equivariance property, and augments the dataset. We then train a policy on the augmented dataset. Our simulation results indicate that policies trained in this way exhibit reduced cross-track error and require fewer interventions compared to policies trained using standard augmentation methods. We also show similar results in autonomous visual navigation by a real ground robot along a path of over 500 m.
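A minimal sketch of the augmentation idea described in the abstract: an equivariant embedding is transformed in latent space to emulate a small viewpoint change, and the action is adjusted correspondingly, so one recorded example yields several training pairs. All names (`latent_rotation`, `augment_pair`, the block-diagonal latent structure, and the steering gain) are illustrative assumptions, not the authors' actual model or API.

```python
# Hypothetical sketch: augmenting an imitation dataset by extrapolating
# equivariant latent embeddings to nearby viewpoints, instead of
# rendering or capturing new images.
import numpy as np

def latent_rotation(z: np.ndarray, dtheta: float) -> np.ndarray:
    """Apply a planar rotation to an equivariant embedding.

    Assumes the encoder was trained so that rotating the camera by
    dtheta acts on the latent vector as a block-diagonal rotation over
    pairs of coordinates (a standard SO(2)-equivariant construction).
    """
    c, s = np.cos(dtheta), np.sin(dtheta)
    rot = np.array([[c, -s], [s, c]])
    # View the flat embedding as 2-D blocks and rotate each block.
    blocks = z.reshape(-1, 2) @ rot.T
    return blocks.reshape(z.shape)

def augment_pair(z: np.ndarray, steer: float,
                 offsets=(-0.1, 0.1), gain: float = 0.5):
    """Generate nearby (embedding, action) pairs from one example.

    For a small heading perturbation dtheta, the corrective steering
    command is shifted in the opposite direction, analogous to the
    multi-camera trick used in behavior cloning for driving.
    """
    augmented = [(z, steer)]
    for dtheta in offsets:
        z_new = latent_rotation(z, dtheta)
        steer_new = steer - gain * dtheta  # steer back toward the path
        augmented.append((z_new, steer_new))
    return augmented

# Example: one 64-D embedding with a straight-ahead action
# expands into three training pairs.
z = np.random.randn(64)
pairs = augment_pair(z, steer=0.0)
print(len(pairs), "training pairs from one example")
```

The policy is then trained on these augmented (embedding, action) pairs rather than on raw images, which is what makes the latent-space extrapolation possible.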
Keywords
equivariant representations, visual navigation policies, multiple cameras, standard data augmentations, computer vision, random noise, visual navigation problem, equivariant embeddings, neural network model, latent representations, equivariance property, augmented dataset, standard augmentation methods, autonomous visual navigation, imitation experience augmentation, image-action pair training, training image synthesis, viewpoint embedding extrapolation, reduced cross-track error