UGAN: Untraceable GAN for Multi-Domain Face Translation

arXiv (2019)

Abstract
Multi-domain image-to-image translation has received increasing attention in the computer vision community. However, the translated images often retain characteristics of the source domain. In this paper, we propose a novel Untraceable GAN (UGAN) to tackle this source-retaining phenomenon. Specifically, the discriminator of UGAN contains a novel source classifier that predicts which domain an image was translated from, in order to determine whether the translated image still retains characteristics of the source domain. Once this adversarial training converges, the translator is able to synthesize the characteristics specific to the target domain while erasing those specific to the source domain. In this way, the source domain of the synthesized image becomes untraceable. We perform extensive experiments, and the results demonstrate that the proposed UGAN outperforms the state-of-the-art StarGAN on three face editing tasks: face aging, makeup, and expression editing. The source code will be made publicly available.
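
The abstract describes the core architectural idea: the discriminator carries an extra source classifier that tries to trace which domain a translated image came from, while the translator is trained adversarially against it. Below is a minimal sketch of that setup, assuming a PyTorch implementation; the network shapes, head names, and the uniform-distribution "untraceability" objective are illustrative assumptions, not the authors' actual design.

```python
# Minimal sketch of the idea described in the abstract (not the authors' code).
# Assumptions: PyTorch; a translator G(x, target_domain) exists elsewhere; the
# discriminator heads and the untraceability objective below are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Discriminator(nn.Module):
    """Shared conv trunk with three heads: real/fake, target domain, source domain."""

    def __init__(self, num_domains: int, img_channels: int = 3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(img_channels, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)            # real vs. translated
        self.tgt_head = nn.Linear(128, num_domains)  # which domain the image belongs to
        self.src_head = nn.Linear(128, num_domains)  # which domain it was translated FROM

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.tgt_head(h), self.src_head(h)


def discriminator_loss(D, real, fake, tgt_label, src_label):
    # D learns to spot translated images, classify the target domain of real
    # images, and trace the source domain of translated images.
    adv_r, tgt_r, _ = D(real)
    adv_f, _, src_f = D(fake.detach())
    return (F.binary_cross_entropy_with_logits(adv_r, torch.ones_like(adv_r))
            + F.binary_cross_entropy_with_logits(adv_f, torch.zeros_like(adv_f))
            + F.cross_entropy(tgt_r, tgt_label)
            + F.cross_entropy(src_f, src_label))


def generator_loss(D, fake, tgt_label):
    # G is rewarded when the translated image looks real, matches the target
    # domain, and leaves the source classifier with no usable clue about the
    # original domain (approximated here by pushing the source prediction
    # toward a uniform distribution; this exact formulation is an assumption).
    adv_f, tgt_f, src_f = D(fake)
    uniform = torch.full_like(src_f, 1.0 / src_f.size(1))
    untraceable = F.kl_div(F.log_softmax(src_f, dim=1), uniform, reduction="batchmean")
    return (F.binary_cross_entropy_with_logits(adv_f, torch.ones_like(adv_f))
            + F.cross_entropy(tgt_f, tgt_label)
            + untraceable)
```

The key difference from a StarGAN-style discriminator is the extra source head and the corresponding term in the generator objective, which penalizes any residual evidence of the source domain in the translated image.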