Stylistic Locomotion Modeling and Synthesis using Variational Generative Models

MIG '19: Motion, Interaction and Games, Newcastle upon Tyne, United Kingdom, October 2019

Abstract
We propose a novel approach to creating generative models of distinctive locomotion styles for humanoid characters. Our approach requires only one or a few style examples together with a database of neutral motion. We are inspired by the observation that humans can distinguish a style from just a few examples. However, learning a generative model of natural human motion, which exhibits large variation and randomness, would normally require extensive training data, and building such a large motion database for every style would take considerable effort. Motion style transfer offers one solution by converting the content of a motion from one style to another; typically, it transfers a content motion to the target style explicitly. We instead propose a variational generative model that combines the rich variation of a neutral motion database with style information from a limited number of examples. We formulate stylistic motion modeling as a conditional distribution learning problem in which style transfer is applied implicitly during training: a conditional variational autoencoder (CVAE) learns the distribution, with the stylistic examples serving as constraints. We demonstrate that our approach can generate an arbitrary number of varied, natural-looking human motions in a style similar to the target.