TMGAN: two-stage multi-domain generative adversarial network for landscape image translation

Liyuan Lin, Shun Zhang, Shulin Ji, Shuxian Zhao, Aolin Wen, Jingpeng Yan, Yuan Zhou, Weibin Zhou

The Visual Computer (2023)

Abstract
Chinese landscape paintings, realistic landscape photographs, and oil paintings each possess distinct artistic characteristics and painting styles, which makes image-to-image translation among these three domains extremely challenging. Existing image-to-image translation networks fall short in either preserving content or conveying style. To address this, we propose TMGAN, a novel two-stage multi-domain generative adversarial network. We add edge maps as additional guidance input and apply content control to better retain content information. In addition, we design the IOST (In/Out module for Style Transfer) module to better support the style transfer task. We decompose the image translation task into two stages: content extraction and style injection. In the content extraction stage, TMGAN extracts a high-resolution edge image from the content image. In the style injection stage, TMGAN takes that high-resolution edge image as input and injects the specified style to generate the output. Notably, both stages are accomplished with only a single multi-domain generator network. Extensive qualitative and quantitative experiments against baseline models validate the strong performance of TMGAN. Furthermore, to facilitate further research, we release MLHQ, a high-quality multi-domain landscape dataset.
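The two-stage workflow described in the abstract can be sketched in outline. This is a minimal illustration, not the authors' implementation: the real TMGAN uses one learned multi-domain generator for both stages, whereas here a gradient-magnitude threshold stands in for the edge-extraction stage and a toy per-domain offset stands in for the IOST style-injection module. All function names and parameters below are hypothetical.

```python
import numpy as np

def extract_edges(content: np.ndarray, thresh: float = 0.25) -> np.ndarray:
    """Stage 1, content extraction: derive an edge map from the content
    image. Gradient magnitude is a stand-in for the learned generator."""
    gy, gx = np.gradient(content.astype(float))
    mag = np.hypot(gx, gy)
    peak = mag.max()
    return (mag > thresh * peak).astype(float) if peak > 0 else mag

def inject_style(edges: np.ndarray, domain: int, n_domains: int = 3) -> np.ndarray:
    """Stage 2, style injection: condition the edge map on a one-hot
    domain code (Chinese landscape / photograph / oil painting). A fixed
    per-domain shift is a toy stand-in for the IOST module."""
    code = np.eye(n_domains)[domain]                 # one-hot style code
    bias = code @ np.linspace(0.1, 0.3, n_domains)   # toy per-domain offset
    return np.clip(edges + bias, 0.0, 1.0)

def translate(content: np.ndarray, domain: int) -> np.ndarray:
    """Full two-stage translation: extract edges, then inject style."""
    return inject_style(extract_edges(content), domain)
```

The key design point this sketch mirrors is the decoupling: the content image is reduced to a domain-agnostic edge representation first, so the second stage only has to decide how to render those edges in the target style.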
Keywords
Generative adversarial networks (GAN), Image-to-image translation, Image generation, Style transfer