Improving Co-Speech Gesture Rule-Map Generation via Wild Pose Matching with Gesture Units

SIGGRAPH Asia Posters (2022)

Abstract
In this poster, we present a method for generating a co-speech text-to-gesture mapping for 3D digital humans. Text and 2D pose data were obtained from public monologue videos, and gesture units were obtained from motion capture sequences. The method works by matching in-the-wild 2D poses to 3D gesture units; to improve the matching of noisy pose sequences with gesture units, we trained a model via contrastive learning. To ensure diverse gesture sequences at runtime, gesture units were clustered using K-means clustering. The resulting rule map incorporates 2,035 gesture units and 210k rules. Our method is highly adaptable and easy to control and use. Demo video: https://youtu.be/QBtGdGE1Wgk
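The poster does not detail the contrastive training setup. Below is a minimal sketch of one plausible formulation, assuming a two-tower encoder (one branch for noisy 2D pose sequences, one for 3D gesture units) trained with an InfoNCE-style loss; the joint counts, sequence lengths, architecture, and all names here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseGestureMatcher(nn.Module):
    """Hypothetical two-tower model: embeds 2D pose sequences and 3D
    gesture units into a shared space where matched pairs are close."""
    def __init__(self, pose_dim=34, unit_dim=63, embed_dim=128):
        super().__init__()
        self.pose_encoder = nn.GRU(pose_dim, embed_dim, batch_first=True)
        self.unit_encoder = nn.GRU(unit_dim, embed_dim, batch_first=True)

    def forward(self, poses, units):
        # Final hidden state summarizes each variable-length sequence.
        _, p = self.pose_encoder(poses)
        _, u = self.unit_encoder(units)
        return F.normalize(p[-1], dim=-1), F.normalize(u[-1], dim=-1)

def info_nce(pose_emb, unit_emb, temperature=0.07):
    """Contrastive loss: matched pose/unit pairs attract, all other
    pairings within the batch act as negatives."""
    logits = pose_emb @ unit_emb.t() / temperature
    targets = torch.arange(len(pose_emb))
    return F.cross_entropy(logits, targets)

# Toy batch: 8 pose sequences (60 frames, 17 joints x 2D) paired with
# 8 gesture units (90 frames, 21 joints x 3D). Dimensions are made up.
model = PoseGestureMatcher()
pose_emb, unit_emb = model(torch.randn(8, 60, 34), torch.randn(8, 90, 63))
loss = info_nce(pose_emb, unit_emb)
loss.backward()
```

At inference, the same embeddings would let a noisy 2D pose be matched to its nearest 3D gesture unit by cosine similarity.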
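The clustering step is likewise only named, not specified. A minimal sketch of how K-means over per-unit feature vectors could support runtime diversity, assuming each gesture unit has a fixed-length descriptor (e.g., the unit-encoder embedding above); cluster count, features, and the `pick_diverse` helper are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical gesture-unit descriptors: one vector per unit.
rng = np.random.default_rng(0)
unit_features = rng.normal(size=(2035, 128))  # 2,035 units, as in the poster

# Cluster the units once, offline. At runtime, avoiding recently used
# units within a cluster keeps consecutive gestures from repeating.
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(unit_features)

def pick_diverse(cluster_id, recent_ids):
    """Pick a unit from the given cluster that was not used recently."""
    candidates = np.flatnonzero(kmeans.labels_ == cluster_id)
    fresh = [i for i in candidates if i not in recent_ids]
    return int(rng.choice(fresh if fresh else candidates))

unit = pick_diverse(kmeans.predict(unit_features[:1])[0], recent_ids=set())
```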