Synthetic data enables faster annotation and robust segmentation for multi-object grasping in clutter
CoRR (2024)
Abstract
Object recognition and object pose estimation in robotic grasping continue to
be significant challenges, since building a labelled dataset can be
time-consuming and financially costly in terms of data collection and annotation. In
this work, we propose a synthetic data generation method that minimizes human
intervention and makes downstream image segmentation algorithms more robust by
combining a generated synthetic dataset with a smaller real-world dataset
(hybrid dataset). Annotation experiments show that the proposed synthetic scene
generation can diminish labelling time dramatically. RGB image segmentation is
trained with the hybrid dataset and combined with depth information to produce
pixel-to-point correspondences for individual segmented objects. The object to
grasp is then determined by the confidence score of the segmentation algorithm.
Pick-and-place experiments demonstrate that segmentation trained on our hybrid
dataset (98.9%) outperforms segmentation trained on the real dataset alone
by 6.7% in grasping success rate. Supplementary material is available at
https://sites.google.com/view/synthetic-dataset-generation.
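The pixel-to-point step described in the abstract (segmentation masks fused with depth to yield per-object 3D points, with the grasp target chosen by segmentation confidence) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the pinhole intrinsics `(fx, fy, cx, cy)`, and the mask/score inputs are all assumptions for the sake of the example.

```python
import numpy as np

def deproject(mask, depth, fx, fy, cx, cy):
    """Map segmented pixels to 3D camera-frame points (pixel-to-point).

    Assumes a standard pinhole model; pixels with zero depth are dropped.
    """
    v, u = np.nonzero(mask)               # pixel coordinates inside the mask
    z = depth[v, u]                       # depth (metres) at those pixels
    valid = z > 0                         # discard missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)    # (N, 3) object point cloud

def select_grasp_target(masks, scores, depth, intrinsics):
    """Pick the instance with the highest segmentation confidence
    and return its index together with its 3D points."""
    best = int(np.argmax(scores))
    fx, fy, cx, cy = intrinsics
    return best, deproject(masks[best], depth, fx, fy, cx, cy)
```

In this sketch, confidence-based selection simply takes the arg-max over per-instance scores; a real system would likely also filter by reachability and mask quality before committing to a grasp.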
Keywords
synthetic data generation, instance segmentation, pick-and-place operation