Visually Grounded Language Learning for Robot Navigation

1st International Workshop on Multimodal Understanding and Learning for Embodied Applications (2019)

Abstract
We present an end-to-end deep learning model for robot navigation from raw visual pixel input and natural text instructions. The proposed model is an LSTM-based sequence-to-sequence neural network architecture with attention, trained on instruction-perception data samples collected in a synthetic environment. We conduct experiments on the SAIL dataset, which we reconstruct in 3D so as to generate the 2D images associated with the data. Our experiments show that the performance of our model is on par with the state of the art, despite the fact that it learns navigational language with end-to-end training from raw visual data.
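The attention step in such a sequence-to-sequence model can be illustrated as follows. This is a minimal NumPy sketch, not the authors' code: names such as `encoder_states` and `decoder_state` are hypothetical, and dot-product scoring is one common choice of attention function among several.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(encoder_states, decoder_state):
    """Dot-product attention: score each encoded instruction token
    against the current decoder state, then return the attention
    weights and the weighted context vector."""
    scores = encoder_states @ decoder_state   # shape (T,)
    weights = softmax(scores)                 # shape (T,), sums to 1
    context = weights @ encoder_states        # shape (H,)
    return weights, context

# Toy example: 5 instruction tokens, hidden size 8.
rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 8))   # stand-in for LSTM encoder outputs
dec = rng.standard_normal(8)        # stand-in for current decoder state
w, ctx = attend(enc, dec)
```

At each decoding step the context vector `ctx` would be combined with the decoder state to predict the next navigation action, letting the model focus on different instruction words as it moves.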
Keywords
instruction following, natural language processing, robot navigation, visual grounding