Generating novel scene compositions from single images and videos

Computer Vision and Image Understanding (2024)

Abstract
Given a large dataset for training, generative adversarial networks (GANs) can achieve remarkable performance on the image synthesis task. However, training GANs in extremely low data regimes remains a challenge, as overfitting often occurs, leading to memorization or training divergence. In this work, we introduce SIV-GAN, an unconditional generative model that can generate new scene compositions from a single training image or a single video clip. We propose a two-branch discriminator architecture with content and layout branches designed to judge internal content realism and scene layout realism separately from each other. This discriminator design enables the synthesis of visually plausible, novel compositions of a scene with varying content and layout, while preserving the context of the original sample. Compared to previous single-image GANs, our model generates more diverse images of higher quality, while not being restricted to the single-image setting. We further introduce a new, challenging task of learning from a few frames of a single video. In this setting, the training images are highly similar to each other, which makes it difficult for prior GAN models to achieve both high synthesis quality and diversity.
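The two-branch discriminator is the central architectural idea: shared low-level features split into a content branch, which pools away spatial information to judge what appears in the image, and a layout branch, which keeps the spatial grid to judge where things are placed. The PyTorch sketch below illustrates this split; the layer counts, channel widths, and head designs are illustrative assumptions, not the published SIV-GAN configuration.

```python
import torch
import torch.nn as nn


class TwoBranchDiscriminator(nn.Module):
    """Hypothetical sketch of a content/layout two-branch discriminator.

    Layer sizes and branch depths are illustrative assumptions,
    not the published SIV-GAN configuration.
    """

    def __init__(self, in_channels=3, base_channels=64):
        super().__init__()
        # Shared low-level feature extractor, used by both branches.
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        # Content branch: global pooling discards spatial arrangement,
        # so this head judges only *what* appears in the image.
        self.content_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(base_channels * 2, 1),
        )
        # Layout branch: a per-location (patch-style) decision map keeps
        # the spatial structure, so this head judges *where* things are.
        self.layout_head = nn.Sequential(
            nn.Conv2d(base_channels * 2, base_channels * 4, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(base_channels * 4, 1, 3, padding=1),
        )

    def forward(self, x):
        feats = self.shared(x)
        content_score = self.content_head(feats)  # (N, 1): global realism of content
        layout_scores = self.layout_head(feats)   # (N, 1, H', W'): per-location realism
        return content_score, layout_scores


if __name__ == "__main__":
    disc = TwoBranchDiscriminator()
    images = torch.randn(2, 3, 128, 128)
    c, l = disc(images)
    print(c.shape, l.shape)  # torch.Size([2, 1]) torch.Size([2, 1, 16, 16])
```

During training, each head would contribute its own adversarial loss term, so the generator is penalized separately for unrealistic content and for unrealistic spatial layouts; this separation is what the abstract credits with enabling recomposition of a scene rather than memorization of the single training sample.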
Key words
Image synthesis, GAN, Low data regime