Anticipating Many Futures: Online Human Motion Prediction and Generation for Human-Robot Interaction
ICRA (2018)
Abstract
Fluent and safe interaction between humans and robots requires both partners to anticipate the other's actions. The bottleneck of most methods is the lack of an accurate model of natural human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB depth images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target-specific training data. The advantage of our probabilistic approach is that it allows us to draw samples of possible future motion patterns. Finally, we investigate how movements and kinematic cues are represented on the learned low-dimensional manifold.
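The core idea in the abstract — a conditional variational autoencoder that, given a window of past skeleton frames, draws multiple plausible windows of future motion — can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: all layer sizes, weight initializations, and function names (`encode`, `sample_future`, the `linear` helper) are assumptions, and the "network" is just fixed random linear maps standing in for trained layers.

```python
# Toy sketch of the CVAE sampling step for motion prediction, assuming the
# setup described in the abstract: condition on a past window of skeleton
# frames, sample a latent code, and decode a future window. Drawing several
# latent samples yields several candidate futures.
import math
import random

PAST, FUTURE, JOINTS, LATENT = 4, 4, 5, 3   # toy window/skeleton/latent sizes

def linear(x, rows, seed):
    """Stand-in dense layer with fixed pseudo-random weights (untrained)."""
    rng = random.Random(seed)
    w = [[rng.uniform(-0.1, 0.1) for _ in range(len(x))] for _ in range(rows)]
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def encode(past):
    """Map the past window to a latent mean and log-variance."""
    return linear(past, LATENT, seed=1), linear(past, LATENT, seed=2)

def sample_future(past, rng):
    """Reparameterization trick: z = mu + sigma * eps, then decode.

    The decoder is conditioned on both the latent sample and the past
    window, so each draw of eps gives a different plausible future."""
    mu, logvar = encode(past)
    z = [m + math.exp(0.5 * lv) * rng.gauss(0, 1) for m, lv in zip(mu, logvar)]
    return linear(z + past, FUTURE * JOINTS * 3, seed=3)

# Flattened 3D joint positions over the past window (dummy data).
past_window = [0.01 * i for i in range(PAST * JOINTS * 3)]
rng = random.Random(42)
futures = [sample_future(past_window, rng) for _ in range(5)]
print(len(futures), len(futures[0]))  # 5 sampled futures, each a flat window
```

Each call to `sample_future` draws a fresh latent code, which is what makes "anticipating many futures" possible: the robot can score or react to the whole set of sampled trajectories rather than a single point estimate.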
Keywords
motion patterns, kinematic cues, natural human motion, human-robot interaction, online human motion prediction, target prediction, RGB depth images, skeletal data, conditional variational autoencoder