
On-Policy and Pixel-Level Grasping Across the Gap Between Simulation and Reality

IEEE Transactions on Industrial Electronics (2024)

Abstract
Grasp detection in cluttered scenes is a very challenging task for robots. Generating synthetic grasping data is a popular way to train and test grasp methods, as in Dex-Net; however, these methods sample training grasps on 3-D synthetic object models but evaluate on images or point clouds with different sample distributions, which reduces performance due to covariate shift and sparse grasp labels. To address these problems, we propose a novel on-policy grasp detection method for parallel grippers, which trains and tests on approximately the same distribution with dense pixel-level grasp labels generated on RGB-D images. An Orthographic-Depth Grasp Generation (ODG-Generation) method is proposed to generate an orthographic depth image through a new imaging model that projects points orthographically; this method then generates multiple candidate grasps for each pixel and retains robust positive grasps through flatness detection, a force-closure metric, and collision detection. On this basis, a comprehensive Pixel-Level Grasp Pose Dataset (PLGP-Dataset) is constructed, which is the first pixel-level grasp dataset with an on-policy distribution. Lastly, we build a grasp detection network with a novel data augmentation process for imbalanced training. Experiments show that our on-policy method can partially overcome the gap between simulation and reality and achieves the best performance.
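The orthographic depth image described in the abstract can be illustrated with a minimal sketch: every 3-D point is projected along the same viewing axis onto a fixed-resolution grid, so a pixel covers the same metric area regardless of distance (unlike a perspective camera). The function and parameter names below are hypothetical illustrations, not the authors' ODG-Generation implementation.

```python
import numpy as np

def orthographic_depth_image(points, grid_size=64, cell=0.01):
    """Project an (N, 3) point cloud to an orthographic depth image.

    Hypothetical sketch: a top-down virtual camera looks along -z,
    so each pixel records the highest visible surface in its cell.
    points : (N, 3) array of x, y, z coordinates in metres
    cell   : metric size of one pixel (same for every depth)
    """
    img = np.zeros((grid_size, grid_size))
    # Shift x/y so the cloud's minimum corner maps to pixel (0, 0).
    origin = points[:, :2].min(axis=0)
    uv = np.floor((points[:, :2] - origin) / cell).astype(int)
    # Discard points that fall outside the image bounds.
    valid = (uv >= 0).all(axis=1) & (uv < grid_size).all(axis=1)
    for (u, v), z in zip(uv[valid], points[valid, 2]):
        # Keep the surface nearest the top-down camera (largest z).
        img[v, u] = max(img[v, u], z)
    return img
```

Because every pixel has a fixed metric footprint, dense per-pixel grasp labels sampled on such an image share the distribution of the images the detector is later evaluated on, which is the on-policy property the abstract emphasizes.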
Key words
Grasping, Training, Solid modeling, Grippers, Testing, 6-DOF, Point cloud compression, On-policy grasp, orthographic depth image, pixel-level grasp, grasp detection