DeepDance: Music-to-Dance Motion Choreography With Adversarial Learning

IEEE Transactions on Multimedia (2021)

Abstract
The creation of improvised dance choreographies is an important research field of cross-modal analysis. A key point of this task is how to effectively create and correlate music and dance through a probabilistic one-to-many mapping, which is essential for creating realistic dances of various genres. To address this issue, we propose a GAN-based cross-modal association framework, DeepDance, which correlates two different modalities (dance motion and music), aiming at creating the desired dance sequence for the input music. Its generator learns from examples to predictively produce the dance movements that best fit the current music piece. On the other hand, its discriminator acts as an external evaluator from the audience and judges the whole performance. The generated dance movements and the corresponding input music are considered well matched if the discriminator cannot distinguish the generated movements from the training samples according to the estimated probability. By adding motion consistency constraints to our loss function, the proposed framework is able to create long, realistic dance sequences. To alleviate the problem of expensive and inefficient data collection, we propose an effective approach to build a large-scale dataset, YouTube-Dance3D, from open data sources. Extensive experiments on currently available music-dance datasets and our YouTube-Dance3D dataset demonstrate that our approach effectively captures the correlation between music and dance and can be used to choreograph appropriate dance sequences.
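The abstract does not spell out the motion consistency constraint, but such a term is commonly a penalty on frame-to-frame differences between consecutive generated poses. A minimal NumPy sketch of that idea, with all names and shapes hypothetical rather than taken from the paper:

```python
import numpy as np

def motion_consistency_loss(poses: np.ndarray) -> float:
    """Penalize large frame-to-frame jumps in a generated pose sequence.

    poses: (T, J, 3) array of T frames, J joints, 3D joint coordinates.
    Returns the mean squared difference between consecutive frames --
    a hypothetical stand-in for the paper's consistency constraint.
    """
    diffs = poses[1:] - poses[:-1]   # (T-1, J, 3) per-frame motion deltas
    return float(np.mean(diffs ** 2))

# A perfectly still sequence incurs zero consistency penalty.
still = np.zeros((10, 24, 3))
print(motion_consistency_loss(still))  # 0.0

# A jittery sequence is penalized more than a smooth one.
t = np.linspace(0.0, 1.0, 10).reshape(10, 1, 1)
smooth = np.tile(t, (1, 24, 3))
jitter = smooth + np.random.default_rng(0).normal(0.0, 0.5, smooth.shape)
print(motion_consistency_loss(smooth) < motion_consistency_loss(jitter))  # True
```

In a GAN setup like the one described, such a term would be added to the generator's adversarial loss with a weighting coefficient, trading off realism against temporal smoothness.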
Keywords
Music-driven dance choreography, adversarial learning, cross-modal association