CVGAN: Image Generation with Capsule Vector-VAE

Image Analysis and Processing, ICIAP 2022, Part I (2022)

Abstract
In unsupervised learning, extracting a useful representation space is an open challenge in machine learning. Two important contributions in this field are the Variational Auto-Encoder (VAE), which learns a continuous latent representation, and the Vector Quantized VAE (VQ-VAE), which learns a discrete one. VQ-VAE is a discrete latent variable model that has been shown to learn nontrivial feature representations of images in an unsupervised setting, making it a viable alternative to continuous latent variable models such as the VAE. However, training deep discrete variable models is challenging, due to the inherent non-differentiability of the discretization operation. In this paper, we propose the Capsule Vector VAE (CV-VAE), a new model based on the VQ-VAE architecture in which the discrete bottleneck represented by the quantization codebook is replaced with a capsule layer. We demonstrate that capsules can be successfully applied to the clustering procedure, reintroducing differentiability into the model's bottleneck: the capsule layer clusters the encoder outputs according to the agreement among capsules. The CV-VAE is trained within the generative adversarial paradigm (GAN), CVGAN for short. Our model is shown to perform on par with the original VQGAN (VQ-VAE in a GAN), and CVGAN obtains higher-quality images after only a few epochs of training. We present results on the ImageNet, COCO-Stuff, and FFHQ datasets, comparing the generated images with those produced by VQGAN. The interpretability of the training process for the latent representation is significantly increased while maintaining the structured-bottleneck idea. This has practical benefits, for instance, in unsupervised representation learning, where a large number of capsules may lead to the disentanglement of latent representations.
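The non-differentiability issue the abstract refers to can be illustrated in a few lines. The sketch below is a hypothetical simplification, not the paper's actual layer: it contrasts VQ-VAE's hard nearest-codeword lookup (the argmin blocks gradient flow) with a differentiable soft assignment over code vectors, which is the spirit of replacing the codebook with an agreement-based capsule layer. All array sizes and the dot-product "agreement" score are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 8))          # encoder outputs: 4 vectors, dim 8
codebook = rng.standard_normal((16, 8))  # 16 code vectors (or "capsules")

# VQ-VAE-style bottleneck: hard argmin over squared distances.
# The argmin is piecewise constant, so no gradient flows through it.
d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (4, 16)
hard = codebook[d.argmin(axis=1)]                          # (4, 8)

# Soft bottleneck: similarity-weighted mixture of code vectors.
# The softmax keeps the assignment differentiable w.r.t. z and codebook.
logits = z @ codebook.T                                    # agreement scores
weights = np.exp(logits - logits.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)              # rows sum to 1
soft = weights @ codebook                                  # (4, 8)

print(hard.shape, soft.shape)  # both (4, 8)
```

In a training loop the `soft` output would pass gradients back to both the encoder and the code vectors, whereas the `hard` path needs a workaround such as the straight-through estimator used by the original VQ-VAE.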
Keywords
VAE, Capsules, VQ-VAE, VQGAN, GAN, Computer vision