Multi-Task Domain Adaptation for Deep Learning of Instance Grasping from Simulation

2018 IEEE International Conference on Robotics and Automation (ICRA), 2018

Cited 126 | Views 203
Abstract
Learning-based approaches to robotic manipulation are limited by the scalability of data collection and accessibility of labels. In this paper, we present a multi-task domain adaptation framework for instance grasping in cluttered scenes by utilizing simulated robot experiments. Our neural network takes monocular RGB images and the instance segmentation mask of a specified target object as inputs, and predicts the probability of successfully grasping the specified object for each candidate motor command. The proposed transfer learning framework trains a model for instance grasping in simulation and uses a domain-adversarial loss to transfer the trained model to real robots using indiscriminate grasping data, which is available both in simulation and the real world. We evaluate our model in real-world robot experiments, comparing it with alternative model architectures as well as an indiscriminate grasping baseline.
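The abstract describes a domain-adversarial loss that aligns features between simulated and real grasping data while a task head predicts grasp success. The toy NumPy sketch below illustrates this training signal in the DANN style (gradient reversal): a shared linear feature extractor feeds both a grasp-success head and a domain classifier, and the domain gradient is negated before reaching the shared weights. All names, dimensions, and data here are hypothetical illustrations, not the paper's actual architecture or dataset.

```python
import numpy as np

# Hedged sketch: linear feature extractor W, grasp head wg, domain head wd.
# Domain-adversarial training subtracts the domain gradient (scaled by lam)
# from the shared weights, so features become domain-confusing. Toy data only.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy inputs: x stands in for image+mask features; y_grasp is grasp success;
# y_dom marks simulation (0) vs. real (1) samples.
n, d, k = 64, 8, 4
x = rng.normal(size=(n, d))
y_grasp = (x[:, 0] > 0).astype(float)
y_dom = (rng.random(n) > 0.5).astype(float)

W = rng.normal(scale=0.1, size=(d, k))   # shared feature extractor
wg = rng.normal(scale=0.1, size=k)       # grasp-success head
wd = rng.normal(scale=0.1, size=k)       # domain classifier head
lam, lr = 0.1, 0.5

for _ in range(200):
    f = x @ W                      # shared features
    pg = sigmoid(f @ wg)           # predicted grasp-success probability
    pd = sigmoid(f @ wd)           # predicted domain
    gg = (pg - y_grasp) / n        # logistic-loss gradient wrt grasp scores
    gd = (pd - y_dom) / n          # logistic-loss gradient wrt domain scores
    wg -= lr * f.T @ gg
    wd -= lr * f.T @ gd            # domain head still learns to discriminate
    # Gradient reversal: shared weights get task gradient MINUS lam * domain gradient
    grad_f = gg[:, None] * wg[None, :] - lam * gd[:, None] * wd[None, :]
    W -= lr * x.T @ grad_f

acc = ((sigmoid(x @ W @ wg) > 0.5) == (y_grasp > 0.5)).mean()
print("toy grasp-prediction accuracy:", round(float(acc), 2))
```

In the paper's setting the analogue of `lam` trades off grasp prediction against domain confusion; here it simply scales the reversed domain gradient flowing into the shared extractor.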
Keywords
multitask domain adaptation, deep learning, successful grasping probability, transfer learning framework, domain-adversarial loss, candidate motor command, specified target object, instance segmentation mask, monocular RGB images, neural network, cluttered scenes, instance grasping, robotic manipulation