Improving Data Efficiency Of Self-Supervised Learning For Robotic Grasping

2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA)(2019)

Abstract
Given the task of learning robotic grasping solely from a depth camera input and gripper force feedback, we derive a learning algorithm from an applied point of view that significantly reduces the amount of required training data. Major improvements in time and data efficiency are achieved in two ways. Firstly, we exploit the geometric consistency between the undistorted depth images and the task space: using a relatively small, fully-convolutional neural network, we predict grasp and gripper parameters with great advantages in both training and inference performance. Secondly, motivated by the low random grasp success rate of around 3%, we explore the grasp space in a systematic manner. The final system was learned with 23 000 grasp attempts in around 60 h, improving on current solutions by an order of magnitude. For typical bin picking scenarios, we measured a grasp success rate of (96.6 ± 1.0)%. Further experiments showed that the system is able to generalize and transfer knowledge to novel objects and environments.
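The two ideas above can be illustrated with a minimal sketch. The paper's actual network architecture, camera intrinsics, and learned filters are not given in this abstract, so everything below is an illustrative assumption: a single hand-written "convolutional layer" stands in for the fully-convolutional network that maps an undistorted depth image to a per-pixel grasp-quality map, and a pinhole back-projection stands in for the geometric consistency between image and task space.

```python
import numpy as np

def grasp_quality_map(depth, kernel):
    """Valid 2-D cross-correlation: a stand-in for one layer of the
    fully-convolutional network (the real learned filters are unknown here)."""
    kh, kw = kernel.shape
    h, w = depth.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(depth[i:i + kh, j:j + kw] * kernel)
    return out

def pixel_to_task_space(px, py, depth_m, fx=600.0, fy=600.0, cx=64.0, cy=64.0):
    """Pinhole back-projection from pixel + depth to metric (x, y, z).
    The intrinsics fx, fy, cx, cy are hypothetical example values."""
    x = (px - cx) * depth_m / fx
    y = (py - cy) * depth_m / fy
    return x, y, depth_m

# Synthetic scene: one small raised object in an otherwise flat bin.
depth = np.zeros((128, 128))
depth[60:68, 60:68] = 0.05

kernel = np.ones((8, 8)) / 64.0          # placeholder for learned weights
q = grasp_quality_map(depth, kernel)
iy, ix = np.unravel_index(np.argmax(q), q.shape)   # best grasp pixel
grasp_xyz = pixel_to_task_space(ix, iy, 0.4)       # metric grasp point

# Back-of-envelope on data efficiency (from the abstract's numbers):
# at ~3% random success, 23 000 attempts yield only ~690 positive grasps,
# i.e. ~33 attempts per success, motivating systematic exploration.
expected_positives = 0.03 * 23_000
```

Because the argmax lands directly on the raised object and back-projects to a metric grasp point, the sketch shows why a geometrically consistent, fully-convolutional formulation is cheap at inference time: one forward pass yields a dense grasp map over the whole bin.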
Key words
fully-convolutional neural network, gripper parameters, inference performance, random grasp success rate, grasp space, systematic manner, typical bin picking scenarios, self-supervised learning, robotic grasping, depth camera input, gripper force feedback, learning algorithm, geometric consistency, undistorted depth images, task space, grasp attempts, data efficiency, training data