A Music-Driven Deep Generative Adversarial Model for Guzheng Playing Animation

IEEE Transactions on Visualization and Computer Graphics (2023)

Citations: 3 | Views: 50
Abstract
To date, relatively few efforts have been made on the automatic generation of musical-instrument playing animations. This problem is challenging due to the intrinsically complex temporal relationship between music and human motion, as well as the lack of high-quality music-playing motion datasets. In this article, we propose a fully automatic, deep-learning-based framework to synthesize realistic upper-body animations from novel guzheng music input. Specifically, based on a recorded audiovisual motion-capture dataset, we carefully design a generative adversarial network (GAN) based approach to capture the temporal relationship between the music and the human motion data. In this process, data augmentation is employed to improve the generalization of our approach so that it can handle a variety of guzheng music inputs. Through extensive objective and subjective experiments, we show that our method generates visually plausible guzheng-playing animations that are well synchronized with the input guzheng music, and that it significantly outperforms the state-of-the-art methods. In addition, through an ablation study, we validate the contributions of the carefully designed modules in our framework.
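As a rough illustration of the music-driven setup the abstract describes, the sketch below maps a music-feature sequence to a motion sequence through a recurrent generator, and a discriminator scores the (music, motion) pair. This is not the authors' architecture: all dimensions, weights, and function names here are placeholder assumptions, with no training loop shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): T audio frames drive
# T frames of upper-body pose parameters.
T, MUSIC_DIM, MOTION_DIM, HIDDEN = 120, 128, 42, 64

def generator(music, Wm, Wh, Wo):
    """Map a music-feature sequence (T, MUSIC_DIM) to a motion sequence
    (T, MOTION_DIM) with a simple recurrent cell, so each pose depends
    on the current audio frame and on past poses (temporal coupling)."""
    h = np.zeros(HIDDEN)
    motion = []
    for frame in music:
        h = np.tanh(frame @ Wm + h @ Wh)  # carry temporal state forward
        motion.append(h @ Wo)             # emit the pose for this frame
    return np.stack(motion)

def discriminator(music, motion, Wd):
    """Score how plausible a (music, motion) pair looks; adversarial
    training would push generated pairs toward the 'real' score."""
    pair = np.concatenate([music, motion], axis=1)  # (T, MUSIC+MOTION)
    logits = pair @ Wd                              # per-frame scores
    return 1.0 / (1.0 + np.exp(-logits.mean()))     # sigmoid of the mean

# Random placeholder weights and input, just to exercise the shapes.
music = rng.standard_normal((T, MUSIC_DIM))
Wm = rng.standard_normal((MUSIC_DIM, HIDDEN)) * 0.1
Wh = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
Wo = rng.standard_normal((HIDDEN, MOTION_DIM)) * 0.1
Wd = rng.standard_normal((MUSIC_DIM + MOTION_DIM, 1)) * 0.1

motion = generator(music, Wm, Wh, Wo)
score = discriminator(music, motion, Wd)
print(motion.shape)  # one pose vector per audio frame
```

The recurrent state is what lets a per-frame mapping reflect the temporal music-to-motion relationship the abstract emphasizes; a pure frame-by-frame regressor would ignore motion continuity.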
Keywords
Deep learning, generative adversarial networks, motion capture, guzheng animation, music-driven, data augmentation