Audio-Driven 3D Talking Face for Realistic Holographic Mixed-Reality Telepresence

2023 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom), 2023

Abstract
A machine's ability to effectively understand human speech based on visual input is crucial for efficient communication; however, distinguishing the semantics of speech from facial appearance remains a challenge. This article presents a taxonomy of 3D talking human face methods, categorizing them into GAN-based, NeRF-based, and DLNN-based approaches. The evolution of mixed-reality telepresence now focuses on talking 3D faces that synthesize natural human faces in response to text or audio input. Audio-video datasets aid in training algorithms across different languages and enable speech recognition. Handling noise in the audio data is vital for robust performance, using techniques such as DeepSpeech feature extraction combined with added noise. Latency optimization enhances the user experience, and careful selection of techniques reduces latency. Quantitative and qualitative evaluation methods measure synchronization, face quality, and comparative performance. Talking 3D faces hold potential for advancing mixed-reality communication, requiring careful consideration of audio-video datasets, noise reduction, latency, and evaluation techniques.
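The robustness point above (DeepSpeech-style audio features combined with added noise) is typically realized as noise augmentation of the training audio. The snippet below is a minimal sketch of that idea under assumed details, not the paper's actual pipeline: the function name, the chosen SNR levels, and the placeholder waveform are illustrative only.

```python
import numpy as np

def add_gaussian_noise(waveform: np.ndarray, snr_db: float) -> np.ndarray:
    """Corrupt a mono waveform with white Gaussian noise at a target SNR in dB."""
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return waveform + noise

# Augment one clip at several SNR levels before feature extraction, so the
# downstream audio encoder (e.g. DeepSpeech features) sees both clean and
# noisy versions of the same utterance. The 1 s clip at 16 kHz is a placeholder.
clean = np.random.uniform(-1.0, 1.0, size=16000)
augmented = [add_gaussian_noise(clean, snr) for snr in (20.0, 10.0, 5.0)]
```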
Keywords
holographic telepresence, talking face, 3D