Inverse Reinforcement Learning with Attention-based Feature Extraction from Video Demonstrations.

2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)

Abstract
The sparse reward problem makes it difficult for a robot to receive effective feedback while learning skills with reinforcement learning, which seriously limits training efficiency. In this paper, we propose an inverse reinforcement learning (IRL) method that infers expert intent from video demonstrations and provides dense rewards for the robot to learn the task. We learn representations related to task progress by matching different videos in an embedding space based on their temporal alignment, incorporating attention mechanisms to focus on task-relevant features. Based on these representations, we design an efficient and robust dense reward function that improves the efficiency of the reinforcement learning process. We validated our method on robotic manipulation tasks, and the experimental results show that our approach converges faster and more stably, and generalizes to skill learning with video demonstrations from unseen embodiments.
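As a rough illustration of how an embedding-based dense reward of this kind could be computed, the Python sketch below scores the current observation by its nearest temporally aligned demonstration frame. The encoder, the `demo_embeddings` array, and the specific reward shaping are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: dense reward from distance to a temporally matched
# demonstration frame in a learned embedding space. All names here are
# illustrative; the paper's concrete reward design may differ.
import numpy as np

def dense_reward(obs_embedding: np.ndarray,
                 demo_embeddings: np.ndarray) -> float:
    """Score the current observation against an expert video.

    obs_embedding:   (D,) embedding of the current camera observation,
                     assumed to come from an attention-based encoder
                     trained with temporal alignment (not shown here).
    demo_embeddings: (T, D) embeddings of the expert video, one per frame.
    """
    # Distance from the current observation to every demonstration frame.
    dists = np.linalg.norm(demo_embeddings - obs_embedding, axis=1)

    # The nearest-neighbour frame index acts as a task-progress estimate.
    t_star = int(np.argmin(dists))
    progress = t_star / (len(demo_embeddings) - 1)

    # Combine estimated progress with closeness to the matched frame so
    # the reward stays dense throughout the episode.
    closeness = -float(dists[t_star])
    return progress + closeness
```

In this sketch, the progress term rewards advancing along the demonstrated trajectory while the closeness term penalizes drifting away from it in embedding space; both rely on the encoder having already aligned observations and demonstrations.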