Kernelized Gradient Descent Method for Learning from Demonstration.

Neurocomputing (2023)

Abstract
Learning from demonstration (LfD) has been widely studied as a convenient method for robot learning. In the LfD paradigm, the robot is required to learn relevant motion patterns from human demonstrations and apply them to various situations. Although many advancements have been achieved in recent studies, there is a lack of solutions that allow robots to learn the variability of the demonstrations and adapt their reproductions to unseen scenarios in high-dimensional or long-trajectory cases. In this paper, a novel nonparametric kernelized gradient descent (KGD) method for LfD is proposed, which capitalizes on gradient descent and kernel-based approaches and produces a model with fewer open parameters than methods that employ basis functions. The proposed KGD method can accurately represent complex demonstrations and exploit their variability to adapt trajectories smoothly and precisely to different unseen situations described by newly desired start-, via-, and end-points, with high computational efficiency. Experiments were conducted to evaluate the performance of the proposed KGD method and to compare it with the commonly used probabilistic kernelized movement primitive (KMP) and mean-prior Gaussian process regression (MP-GPR) methods. The results indicate that KGD outperforms KMP and MP-GPR in terms of precision at the desired points, reproduction smoothness, and computation time. Owing to its high efficiency and accurate reproduction, KGD has the potential to benefit human–robot collaboration, facilitate assembly lines, and improve robot learning. An important future challenge will be to extend KGD to handle nonlinear constraints so that more complex tasks can be learned.
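The abstract gives no implementation detail, so the following is only a rough, hypothetical sketch of the general idea it names — a nonparametric kernel expansion over time whose weights are fitted by plain gradient descent on the reproduction error — not the paper's actual KGD algorithm. All names, the RBF kernel choice, and the synthetic demonstration are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(a, b, length=0.1):
    # Gaussian (RBF) kernel between two sets of time points (assumed kernel choice)
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))

# Synthetic stand-in for a 1-D human demonstration sampled at 50 time steps
t = np.linspace(0.0, 1.0, 50)
demo = np.sin(2 * np.pi * t)

K = rbf_kernel(t, t)          # Gram matrix over the demonstration time steps
alpha = np.zeros_like(t)      # kernel expansion weights, learned below

# Plain gradient descent on the mean squared reproduction error:
#   loss(alpha) = ||K @ alpha - demo||^2 / (2 n)
lr = 0.3
for _ in range(2000):
    residual = K @ alpha - demo
    alpha -= lr * (K.T @ residual) / len(t)

# The learned trajectory is a weighted sum of kernels centered at the time steps
reproduction = K @ alpha
```

Because the model is a kernel expansion with no basis functions, its only open parameters here are the kernel length scale and the descent step size; adapting to a new start-, via-, or end-point could, in this toy setting, be done by re-running the same descent with heavily weighted extra target points, though the paper's precise adaptation mechanism is not described in the abstract.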
Keywords
Learning from demonstration, Kernelization, Robot skill learning