Foreground-Background Disentanglement based on Image and Feature Co-Learning for 3D-Aware Generative Models.

2023 IEEE International Conference on Visual Communications and Image Processing (VCIP)(2023)

Abstract
Research on generative models that exploit 3D information has recently been active. GIRAFFE, one of the latest 3D-aware generative models, achieves better feature disentanglement than prior generative models because it renders an image via volume rendering of independently formed 3D neural feature fields. However, GIRAFFE still struggles to cleanly disentangle the foreground from the background. To achieve better disentanglement than GIRAFFE, we propose co-adversarial learning of the generative model at both the image and feature levels. Extensive simulation experiments show that the proposed generative model produces photo-realistic images with fewer parameters than existing 3D-aware generative models, along with excellent foreground-background disentanglement.
Keywords
3D-aware generative model, foreground-background disentanglement