Learning With Dual Demonstration Domains: Random Domain-Adaptive Meta-Learning

IEEE ROBOTICS AND AUTOMATION LETTERS (2022)

Abstract
Although robots have been widely applied in various fields, enabling a robot to perform a wide range of tasks as humans do remains a significant challenge. One promising approach is meta-learning, which enables robots to learn from demonstrations under the principle of "learning to learn." However, most meta-learning methods teach robots to learn from only a single demonstration domain, i.e., videos of either a human or a robot performing tasks (human demonstrations or robot demonstrations). Given that humans can acquire and merge knowledge from multiple related domains, this letter proposes a novel yet efficient Random Domain-Adaptive Meta-Learning (RDAML) framework that teaches the robot to learn from multiple demonstration domains (e.g., human demonstrations + robot demonstrations) with different random sampling parameters. Once training is complete, the model can adapt to a new environment given a corresponding visual demonstration. Extensive experimental results show that the model trained with the proposed RDAML algorithm achieves better generalization. We demonstrate the effectiveness of RDAML on real-world placing experiments with a UR5 robot arm, where it significantly outperforms current state-of-the-art methods when either human demonstrations or robot demonstrations are used to teach the robot at test time.
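To make the core idea concrete, the sketch below shows a minimal first-order, MAML-style meta-training loop that randomly mixes two demonstration domains, as the abstract describes. Everything here is an illustrative assumption rather than the authors' implementation: the toy quadratic loss stands in for a behavior-cloning loss, the per-iteration draw of `p_human` stands in for the paper's random sampling parameters, and names such as `meta_train` and `toy_grad` are hypothetical.

```python
# Hypothetical sketch of domain-randomized meta-training (not the authors' code).
# Each "task" is a (support, query) pair; the quadratic loss is a stand-in for
# a behavior-cloning loss on demonstration frames.
import numpy as np

rng = np.random.default_rng(0)

def toy_grad(theta, demo):
    # Gradient of the stand-in loss ||theta - demo||^2.
    return 2.0 * (theta - demo)

def meta_train(theta, human_tasks, robot_tasks,
               meta_steps=1000, inner_lr=0.05, outer_lr=0.01):
    for _ in range(meta_steps):
        # Random domain sampling: draw a fresh mixing weight each iteration,
        # then pick the human or robot demonstration domain accordingly.
        p_human = rng.uniform()
        domain = human_tasks if rng.uniform() < p_human else robot_tasks
        support, query = domain[rng.integers(len(domain))]

        # Inner loop: adapt to the sampled demonstration (one gradient step).
        adapted = theta - inner_lr * toy_grad(theta, support)

        # Outer loop (first-order): update the meta-parameters so the adapted
        # policy does well on the held-out (query) part of the same task.
        theta = theta - outer_lr * toy_grad(adapted, query)
    return theta

if __name__ == "__main__":
    human_tasks = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(5)]
    robot_tasks = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(5)]
    theta = meta_train(np.zeros(3), human_tasks, robot_tasks)
    print("meta-parameters:", theta)
```

The one design point the sketch emphasizes is that the domain choice is re-randomized every meta-iteration, so the meta-parameters are forced to adapt well from both human and robot demonstrations rather than specializing to one domain.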
Keywords
Learning from demonstration, imitation learning, domain adaptation, meta-learning, deep learning methods