Trial and Error

IEEE ROBOTICS & AUTOMATION MAGAZINE (2016)

Abstract
Biological systems can efficiently reuse previous experiences to change their behavioral strategies, for example to avoid enemies or to find food, which greatly reduces the number of samples they need from the real environment to improve a behavioral policy. For real robotic systems, it is likewise desirable to use only a limited number of real-environment samples, both because of the limited durability of the hardware and to reduce the time needed to improve control performance. In this article, we use previous experiences as local models of the environment so that the movement policy of a humanoid robot can be efficiently improved with a limited number of samples from its real environment. We applied the proposed learning method to a real humanoid robot and successfully achieved two challenging control tasks. First, the robot acquired a policy for a cart-pole swing-up task in a real-virtual hybrid task environment, in which it waves a PlayStation (PS) Move motion controller to drive a cart-pole in a virtual simulator. Second, we applied the proposed method to a challenging basketball-shooting task in a real environment.
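The core idea the abstract describes, reusing previously collected samples as a local model of the environment so that policy improvement happens mostly in the learned model rather than on the real robot, can be illustrated with a toy Dyna-style sketch. This is not the authors' algorithm; the linear system, the least-squares model fit, and the proportional policy are all illustrative assumptions.

```python
import random

# Toy "real" system (unknown to the learner): x' = A*x + B*u + noise.
# Stand-in for the real robot; every call to real_step is a costly real sample.
A_TRUE, B_TRUE = 0.9, 0.5

def real_step(x, u):
    return A_TRUE * x + B_TRUE * u + random.gauss(0, 0.01)

# 1) Collect only a small number of real samples (x, u, x_next).
random.seed(0)
samples = []
x = 1.0
for _ in range(10):
    u = random.uniform(-1, 1)
    x_next = real_step(x, u)
    samples.append((x, u, x_next))
    x = x_next

# 2) Fit a local linear model x' ~ a*x + b*u from those samples
#    (least squares via the 2x2 normal equations).
sxx = sum(s[0] * s[0] for s in samples)
sxu = sum(s[0] * s[1] for s in samples)
suu = sum(s[1] * s[1] for s in samples)
sxy = sum(s[0] * s[2] for s in samples)
suy = sum(s[1] * s[2] for s in samples)
det = sxx * suu - sxu * sxu
a_hat = (sxy * suu - suy * sxu) / det
b_hat = (suy * sxx - sxy * sxu) / det

# 3) Improve a proportional policy u = -k*x entirely inside the learned
#    model: virtual rollouts consume no real-robot samples at all.
def simulated_cost(k, x0=1.0, horizon=20):
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        x = a_hat * x + b_hat * u  # step the learned model, not the robot
        cost += x * x
    return cost

best_k = min((k * 0.1 for k in range(31)), key=simulated_cost)
print(f"fitted model: a={a_hat:.2f}, b={b_hat:.2f}, best gain k={best_k:.1f}")
```

Only step 1 touches the "real" system; the policy search in step 3 runs against the fitted local model, which is the sample-efficiency argument the abstract makes.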