A Shared Representation for Photorealistic Driving Simulators

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Abstract
A powerful simulator greatly reduces the need for real-world tests when training and evaluating autonomous vehicles. Data-driven simulators have flourished with the recent advances in conditional Generative Adversarial Networks (cGANs), providing high-fidelity images. The main challenge is synthesizing photorealistic images while following given constraints. In this work, we propose to improve the quality of generated images by rethinking the discriminator architecture. The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses. We build on successful cGAN models to propose a new semantically-aware discriminator that better guides the generator. We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation, content reconstruction, and coarse-to-fine adversarial reasoning. The achieved improvements are generic and simple enough to be applied to any architecture for conditional image synthesis. We demonstrate the strength of our method on scene, building, and human synthesis tasks across three different datasets. The code is available at https://github.com/vita-epfl/SemDisc.
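To make the discriminator design described above concrete, below is a minimal PyTorch sketch of a discriminator with a shared encoder feeding three heads: semantic segmentation, content reconstruction, and adversarial prediction at two granularities. All module names, channel sizes, and head designs are illustrative assumptions for exposition, not the authors' implementation; the actual SemDisc code is in the linked repository.

```python
# Minimal sketch of a semantically-aware multi-head discriminator.
# Layer sizes and head designs are illustrative assumptions, not the SemDisc code.
import torch
import torch.nn as nn

class SharedDiscriminator(nn.Module):
    def __init__(self, in_channels=3, num_classes=35, base=64):
        super().__init__()
        # Shared encoder: produces the latent representation used by all heads.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 4),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Head 1: per-pixel semantic segmentation logits (semantic awareness).
        self.seg_head = nn.Conv2d(base * 4, num_classes, 1)
        # Head 2: coarse reconstruction of the input image content.
        self.rec_head = nn.Conv2d(base * 4, in_channels, 1)
        # Head 3: adversarial reasoning at two granularities:
        # a fine patch-level real/fake map and a coarse image-level score.
        self.adv_fine = nn.Conv2d(base * 4, 1, 4, padding=1)
        self.adv_coarse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base * 4, 1)
        )

    def forward(self, image):
        z = self.encoder(image)          # shared latent representation
        return {
            "seg": self.seg_head(z),     # [B, num_classes, H/8, W/8]
            "rec": self.rec_head(z),     # [B, in_channels, H/8, W/8]
            "adv_fine": self.adv_fine(z),
            "adv_coarse": self.adv_coarse(z),
        }

if __name__ == "__main__":
    d = SharedDiscriminator()
    out = d(torch.randn(2, 3, 256, 256))
    print({k: tuple(v.shape) for k, v in out.items()})
```

In a training loop of this kind, the segmentation and reconstruction heads would be supervised against the input semantic map and the real image, while both adversarial heads receive the usual real/fake losses, so the shared latent representation has to encode scene semantics as well as realism cues.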
Keywords
Semantics, Image synthesis, Generators, Training, Task analysis, Image segmentation, Image reconstruction, generative adversarial networks, autonomous vehicles, shared representation