Imitating Tool-Based Garment Folding From a Single Visual Observation Using Hand-Object Graph Dynamics

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS (2024)

Abstract
Garment folding is a ubiquitous domestic task that is difficult to automate due to the highly deformable nature of fabrics. In this article, we propose a novel learning-from-demonstration method that enables robots to autonomously manipulate an assistive tool to fold garments. In contrast to traditional methods that rely on low-level pixel features, our solution uses a dense visual descriptor to encode the demonstration into a high-level hand-object graph (HoG), which efficiently represents the interactions between the robot and the manipulated tool. We then leverage a graph neural network to learn the forward dynamics model from HoGs; given only a single demonstration, the imitation policy is optimized with a model predictive controller to accomplish the folding task. To validate the proposed approach, we conducted a detailed experimental study on a robotic platform instrumented with vision sensors and a custom-made end-effector that interacts with the folding board.
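The control scheme the abstract describes, a learned forward dynamics model queried inside a model predictive controller, can be illustrated with a toy receding-horizon loop. Everything below is a hypothetical stand-in, not the paper's implementation: `dynamics` replaces the trained graph neural network with a linear placeholder, the HoG node states are plain 2-D points, and a small discrete candidate set substitutes for the real action space.

```python
import itertools
import numpy as np

def dynamics(state, action):
    # Placeholder for the learned HoG forward model (a trained GNN in the
    # paper): here, node positions simply shift by half the commanded action.
    return state + 0.5 * action

def mpc_plan(state, goal, candidates, horizon=3):
    # Receding-horizon search: roll out every candidate action sequence
    # through the dynamics model, accumulate the distance to the goal graph
    # as the stage cost, and return the first action of the best sequence.
    best_cost, best_first = np.inf, None
    for seq in itertools.product(candidates, repeat=horizon):
        s, cost = state, 0.0
        for a in seq:
            s = dynamics(s, a)
            cost += np.linalg.norm(s - goal)
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

# Toy usage: 4 graph nodes with 2-D positions; the goal node configuration
# stands in for the HoG extracted from a single demonstration.
state = np.zeros((4, 2))
goal = np.ones((4, 2))
move = np.ones((4, 2))
candidates = [np.zeros((4, 2)), move, -move]
for _ in range(5):
    state = dynamics(state, mpc_plan(state, goal, candidates))
print(np.linalg.norm(state - goal))  # -> 0.0 (goal reached)
```

In a faithful implementation the planner would optimize over continuous end-effector trajectories and the rollout would call the trained graph network; the exhaustive product over a discrete set is only for making the receding-horizon structure explicit.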
Keywords
Robots, Task analysis, Clothing, Visualization, Predictive models, Manipulator dynamics, Trajectory, Cloth folding, graph dynamics model, hand-object graph (HoG), imitation learning (IL), tool manipulation