Stable Reinforcement Learning With Autoencoders For Tactile And Visual Data

2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016

Cited by 174
Abstract
For many tasks, tactile or visual feedback is helpful or even crucial. However, designing controllers that take such high-dimensional feedback into account is non-trivial. Therefore, robots should be able to learn tactile skills through trial and error by using reinforcement learning algorithms. The input domain for such tasks, however, might include strongly correlated or irrelevant dimensions, making it hard to specify a suitable metric on such domains. Autoencoders specialize in finding compact representations, where defining such a metric is likely to be easier. Therefore, we propose a reinforcement learning algorithm that can learn non-linear policies in continuous state spaces, which leverages representations learned using autoencoders. We first evaluate this method on a simulated toy task with visual input. Then, we validate our approach on a real-robot tactile stabilization task.
Keywords
reinforcement learning algorithm,autoencoders,tactile data,visual data,tactile feedback,visual feedback,controller design,robot learning,nonlinear policy learning,robot tactile stabilization task
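The two-stage idea in the abstract, first compressing correlated high-dimensional sensor input with an autoencoder and then letting a policy act on the compact code, can be sketched as follows. This is a minimal illustrative example, not the paper's actual method: it uses a hypothetical 64-dimensional input, a 4-dimensional latent code, a plain linear autoencoder trained by gradient descent, and a made-up linear policy on the latent state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64-d raw tactile/visual observation, 4-d latent code.
D, K = 64, 4

# Toy data: observations lying near a low-dimensional subspace,
# mimicking strongly correlated sensor dimensions.
basis = rng.normal(size=(K, D))
obs = rng.normal(size=(500, K)) @ basis + 0.01 * rng.normal(size=(500, D))

# Linear autoencoder: encoder W_e (D -> K), decoder W_d (K -> D),
# trained by gradient descent on the squared reconstruction error.
W_e = 0.1 * rng.normal(size=(D, K))
W_d = 0.1 * rng.normal(size=(K, D))
lr = 1e-3

def recon_error(X):
    return float(np.mean((X @ W_e @ W_d - X) ** 2))

err_before = recon_error(obs)
for _ in range(200):
    Z = obs @ W_e                     # encode
    R = Z @ W_d                       # decode
    G = 2.0 * (R - obs) / len(obs)    # gradient of the loss w.r.t. R
    W_d -= lr * (Z.T @ G)
    W_e -= lr * (obs.T @ (G @ W_d.T))
err_after = recon_error(obs)

# The policy then acts on the compact code z = encode(x) instead of the raw
# observation x; here a hypothetical linear policy with 2 action dimensions.
theta = rng.normal(size=(K, 2))
z = obs[0] @ W_e        # compact state representation
action = z @ theta      # policy output on the latent space
```

The point of the sketch is only the interface: the reinforcement learner never sees the 64 correlated input dimensions, so distance-based machinery (kernels, exploration noise, policy parameterization) operates on the 4-dimensional code where a metric is easier to define.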