Learning a Generative Transition Model for Uncertainty-Aware Robotic Manipulation

2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021

Citations: 4
Abstract
Robot learning of real-world manipulation tasks remains challenging and time-consuming, even though actions are often simplified by single-step manipulation primitives. To compensate for the removed time dependency, we additionally learn an image-to-image transition model that is able to predict the next state including its uncertainty. We apply this approach to bin picking, the task of emptying a bin as fast as possible using grasping as well as pre-grasping manipulation. The transition model is trained with up to 42 000 pairs of real-world images taken before and after a manipulation action. Our approach enables two important skills. First, for applications with flange-mounted cameras, picks per hour (PPH) can be increased by around 15% by skipping image measurements. Second, we use the model to plan action sequences ahead of time and optimize time-dependent rewards, e.g. to minimize the number of actions required to empty the bin. We evaluate both improvements in real-robot experiments and achieve over 700 PPH in the YCB Box and Blocks Test.
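The planning skill described above can be illustrated with a minimal sketch: a transition model predicts the next state together with an uncertainty estimate, and a planner picks the action sequence that maximizes reward while penalizing uncertain predictions. All names, dynamics, and rewards here are toy assumptions for illustration, not the authors' actual image-to-image model or API.

```python
# Hypothetical stand-ins for the paper's learned components (the names,
# the scalar "state", and the dynamics are assumptions for illustration):
# a transition model returning (next_state, uncertainty) and a reward
# that favors emptying the bin quickly.

def transition_model(state, action):
    """Toy generative transition model: deterministic dynamics plus a
    hand-crafted uncertainty that grows with action magnitude."""
    next_state = state - action          # e.g. items removed from the bin
    uncertainty = 0.1 * abs(action)      # stand-in for predictive variance
    return next_state, uncertainty

def reward(state):
    """Toy time-dependent reward: fewer remaining items is better."""
    return -state

def plan_actions(state, candidate_actions, horizon=3, risk_weight=1.0):
    """Greedy multi-step lookahead: at each step pick the action whose
    predicted next state maximizes reward minus an uncertainty penalty."""
    plan = []
    for _ in range(horizon):
        def score(action):
            next_state, uncertainty = transition_model(state, action)
            return reward(next_state) - risk_weight * uncertainty
        best = max(candidate_actions, key=score)
        state, _ = transition_model(state, best)
        plan.append(best)
        if state <= 0:                   # bin emptied, stop early
            break
    return plan, state
```

In the paper's setting, the state would be a depth image, the transition model an image-to-image network, and the uncertainty its predictive variance; the same plan-then-act loop is what allows skipping intermediate image measurements.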
Keywords
uncertainty-aware robotic manipulation,robot learning,real-world manipulation tasks,single-step manipulation primitives,image-to-image transition model,bin picking,pre-grasping manipulation,manipulation action,flange-mounted cameras,image measurements,action sequences,YCB box and blocks test,generative transition model learning