Audio-Driven Lips and Expression on 3D Human Face

Advances in Computer Graphics, CGI 2023, Pt. II (2024)

Abstract
Extensive research has delved into audio-driven 3D facial animation, with numerous attempts to achieve human-like performance. However, creating truly realistic and expressive 3D facial animation remains challenging, as existing methods often struggle to capture the subtle nuances of anthropomorphic expressions. We propose the Audio-Driven Lips and Expression (ADLE) method, designed to generate highly expressive and lifelike conversations between individuals, complete with essential social signals such as laughter and excitement, from audio cues alone. The foundation of our approach is an audio-expression-consistency strategy that effectively disentangles person-specific lip movements from the accompanying facial expressions. As a result, ADLE robustly learns lip movements and generic expression parameters for a 3D human face from an audio sequence: a multimodal fusion approach that generates accurate lip movements paired with vivid facial expressions on a 3D face in real time. Experiments validate that ADLE outperforms other state-of-the-art methods in this field, making it a promising approach for a wide range of applications.
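The disentanglement idea above can be pictured as a shared audio encoder feeding two separate output heads, so lip parameters and expression parameters are predicted (and can be supervised) independently. The following is a minimal illustrative sketch of that structure only; all layer sizes, parameter dimensions, and the use of random untrained weights are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

AUDIO_DIM = 64   # per-frame audio feature size (illustrative)
HIDDEN = 128     # shared encoder width (illustrative)
LIP_DIM = 20     # lip-related 3D face parameters (illustrative)
EXPR_DIM = 32    # generic expression parameters (illustrative)

# Randomly initialised weights stand in for trained ones.
W_enc = rng.standard_normal((AUDIO_DIM, HIDDEN)) * 0.05
W_lip = rng.standard_normal((HIDDEN, LIP_DIM)) * 0.05
W_expr = rng.standard_normal((HIDDEN, EXPR_DIM)) * 0.05

def relu(x):
    return np.maximum(x, 0.0)

def predict(audio_frames):
    """Map a (T, AUDIO_DIM) audio feature sequence to per-frame
    lip parameters (T, LIP_DIM) and expression parameters (T, EXPR_DIM)."""
    h = relu(audio_frames @ W_enc)   # shared audio representation
    return h @ W_lip, h @ W_expr     # two heads: disentangled parameter sets

# Fake audio features for 100 frames, just to show the shapes.
audio = rng.standard_normal((100, AUDIO_DIM))
lips, expr = predict(audio)
print(lips.shape, expr.shape)  # (100, 20) (100, 32)
```

Keeping the heads separate means a lip-sync loss can be applied to one output and an expression loss to the other, which is one common way to realize the kind of disentangled supervision the abstract describes.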
Keywords
Face Expression, Lips Movement, Fusion