Towards Generalizable Zero-Shot Manipulation Via Translating Human Interaction Plans

ICRA 2024

Abstract
We pursue the goal of developing robots that can interact zero-shot with generic unseen objects via a diverse repertoire of manipulation skills, and show how passive human videos can serve as a rich source of data for learning such generalist robots. Unlike typical robot learning approaches, which directly learn how a robot should act from interaction data, we adopt a factorized approach that can leverage large-scale human videos to learn how a human would accomplish a desired task (a human 'plan'), followed by 'translating' this plan to the robot's embodiment. Specifically, we learn a human 'plan predictor' that, given a current image of a scene and a goal image, predicts the future hand and object configurations. We combine this with a 'translation' module that learns a plan-conditioned robot manipulation policy and allows following human plans for generic manipulation tasks in a zero-shot manner with no deployment-time training. Importantly, while the plan predictor can leverage large-scale human videos for learning, the translation module only requires a small amount of in-domain data and can generalize to tasks not seen during training. We show that our learned system can perform over 16 manipulation skills that generalize to 40 objects, encompassing 100 real-world tasks for table-top manipulation and diverse in-the-wild manipulation. https://homangab.github.io/hopman/
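To make the factorized structure concrete, here is a minimal sketch of the two-stage pipeline the abstract describes: a plan predictor that maps a current image and a goal image to a human plan, and a translation module that conditions a robot policy on that plan. The class names, tensor shapes, and MLP architectures below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the factorized pipeline described in the abstract.
# Module names, image resolution, and plan/action dimensions are assumptions.
import torch
import torch.nn as nn

class PlanPredictor(nn.Module):
    """Predicts a human 'plan' (future hand/object configurations)
    from a current image and a goal image."""
    def __init__(self, feat_dim=512, plan_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(2 * 3 * 64 * 64, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, plan_dim)  # e.g., hand + object keypoints

    def forward(self, current_img, goal_img):
        x = torch.cat([current_img, goal_img], dim=1)  # (B, 6, 64, 64)
        return self.head(self.encoder(x))

class TranslationPolicy(nn.Module):
    """Plan-conditioned robot policy: maps the predicted human plan
    and the current image to a robot action."""
    def __init__(self, plan_dim=32, feat_dim=256, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.policy = nn.Linear(feat_dim + plan_dim, action_dim)

    def forward(self, current_img, plan):
        feat = self.encoder(current_img)
        return self.policy(torch.cat([feat, plan], dim=-1))

# Zero-shot rollout sketch: predict a human plan, then translate it to an action.
current_img = torch.randn(1, 3, 64, 64)
goal_img = torch.randn(1, 3, 64, 64)
plan = PlanPredictor()(current_img, goal_img)
action = TranslationPolicy()(current_img, plan)
print(action.shape)  # torch.Size([1, 7])
```

The key design point the abstract emphasizes is that only the translation module needs in-domain robot data; the plan predictor can be trained from large-scale passive human videos.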
Keywords
Machine Learning for Robot Control, Learning from Demonstration, Big Data in Robotics and Automation