GraspFusionNet: a two-stage multi-parameter grasp detection network based on RGB–XYZ fusion in dense clutter

Machine Vision and Applications (2020)

Abstract
Robotic grasping of a diverse range of novel objects in dense clutter is a great challenge and is critical to many applications. However, current methods are vulnerable to perception uncertainty for densely stacked objects, which limits the accuracy of multi-parameter grasp prediction. In this paper, we propose a two-stage grasp detection pipeline consisting of a sampling stage and a prediction stage. The first (sampling) stage applies a fully convolutional network to generate grasp proposal regions that contain potentially graspable objects. Within each grasp proposal region, the second (prediction) stage predicts complete grasp parameters based on a fusion of RGB–XYZ heightmaps converted from color and depth images. To perceive the essential structures of stable grasping, a 2D CNN and a 3D CNN learn color and geometric features, respectively, for multi-parameter grasp prediction. A direct mapping from heightmaps to grasp parameters is realized through a multi-task loss. Experiments on a self-built dataset and an open dataset are conducted to analyze network performance. The results indicate that the proposed two-stage method outperforms other grasp detection algorithms. Robotic experiments demonstrate generalization ability and robustness for novel objects in dense clutter: the proposed method achieves an average grasp success rate of 82.4%, which is also better than other state-of-the-art methods. Our self-built dataset and robotic grasping video are available at https://github.com/liuwenhai/toteGrasping.git.
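
The fusion-based prediction stage described in the abstract can be illustrated with a minimal PyTorch sketch. It assumes the RGB heightmap crop feeds a 2D CNN branch, the XYZ heightmap is voxelized into a single-channel occupancy grid for the 3D CNN branch, and the fused features drive separate heads for grasp position, angle, width, and quality; all layer sizes, the voxelization step, the head set, and the loss terms are illustrative assumptions, not the paper's actual architecture.

# Minimal sketch of the second (prediction) stage: a 2D CNN over the RGB
# heightmap and a 3D CNN over a voxelized XYZ heightmap, fused to regress
# multi-parameter grasps via a multi-task loss. Layer sizes, the voxel
# input, and the heads (position, angle, width, quality) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraspFusionHead(nn.Module):
    def __init__(self):
        super().__init__()
        # 2D branch: color features from the RGB heightmap crop.
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 3D branch: geometric features from the voxelized XYZ heightmap.
        self.xyz_branch = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        fused = 64 + 32
        # Separate heads realize the direct mapping to grasp parameters.
        self.pos_head = nn.Linear(fused, 3)    # grasp position (x, y, z)
        self.angle_head = nn.Linear(fused, 1)  # gripper rotation angle
        self.width_head = nn.Linear(fused, 1)  # gripper opening width
        self.qual_head = nn.Linear(fused, 1)   # grasp quality logit

    def forward(self, rgb, voxels):
        # rgb: (N, 3, H, W); voxels: (N, 1, D, H, W)
        f = torch.cat([self.rgb_branch(rgb), self.xyz_branch(voxels)], dim=1)
        return (self.pos_head(f), self.angle_head(f),
                self.width_head(f), self.qual_head(f))

def multi_task_loss(pred, target):
    # Unweighted sum of per-parameter losses; any weighting is an assumption.
    pos, angle, width, qual = pred
    loss = F.smooth_l1_loss(pos, target["pos"])
    loss = loss + F.smooth_l1_loss(angle, target["angle"])
    loss = loss + F.smooth_l1_loss(width, target["width"])
    loss = loss + F.binary_cross_entropy_with_logits(qual, target["qual"])
    return loss

Training against this joint loss is one way a single forward pass can map the fused heightmap features directly to all grasp parameters at once, which is the role the multi-task loss plays in the abstract.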
Key words
Deep learning, Multi-parameter grasp, Grasp detection, RGB–XYZ fusion, Dense clutter