Variational inference with Gaussian mixture model and Householder flow

Neural Networks (2019)

Abstract
The variational auto-encoder (VAE) is a powerful and scalable deep generative model. Under the VAE architecture, the choice of the approximate posterior distribution is a crucial issue, with a significant impact on both the tractability and the flexibility of the model. Latent variables are generally assumed to be normally distributed with a diagonal covariance matrix; however, this assumption is not flexible enough to match the true, complex posterior distribution. We introduce a novel approach to designing a flexible and arbitrarily complex approximate posterior distribution. Unlike the standard VAE, we first construct an initial density as a Gaussian mixture model in which each component has a diagonal covariance matrix. This relatively simple distribution is then transformed into a more flexible one by applying a sequence of invertible Householder transformations until the desired complexity is achieved. We also give a detailed theoretical and geometric interpretation of Householder transformations. Finally, because of this change of approximate posterior, the Kullback–Leibler divergence between two mixture densities must be computed, but it has no closed-form solution; we therefore redefine a new variational lower bound by means of an upper bound on this divergence. Compared with other generative models based on a similar VAE architecture, our method achieves new state-of-the-art results on benchmark datasets including MNIST, Fashion-MNIST, Omniglot, and Histopathology, a more challenging medical imaging dataset. The experimental results show that our method improves the flexibility of the posterior distribution more effectively.
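To make the Householder flow step concrete, below is a minimal NumPy sketch of the transformation the abstract describes. The function and variable names (householder_step, householder_flow, v) are illustrative choices, not from the paper; in the actual model the reflection vectors would be produced by the encoder network.

```python
import numpy as np

def householder_step(z, v, eps=1e-8):
    # One Householder reflection: H z with H = I - 2 v v^T / ||v||^2.
    # H is orthogonal with det(H) = -1, so |det(dHz/dz)| = 1 and the
    # flow contributes no log-determinant term to the density.
    v = v / (np.linalg.norm(v) + eps)
    return z - 2.0 * v * (v @ z)

def householder_flow(z0, vs):
    # Compose T reflections: z_T = H_T ... H_2 H_1 z_0.
    z = z0
    for v in vs:
        z = householder_step(z, v)
    return z

# Toy usage: sample z_0 from one diagonal-covariance mixture component,
# then push it through a short Householder flow (T = 3, chosen arbitrarily).
rng = np.random.default_rng(0)
mu, sigma = np.zeros(4), np.ones(4)              # component parameters
z0 = mu + sigma * rng.standard_normal(4)         # reparameterized sample
vs = [rng.standard_normal(4) for _ in range(3)]  # reflection vectors
zT = householder_flow(z0, vs)
```

Since any orthogonal matrix can be written as a product of at most d Householder reflections, a sufficiently long sequence of such steps can rotate each diagonal-covariance mixture component into one with an (effectively) full covariance matrix, which is the source of the added flexibility.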
Keywords
Variational auto-encoder, Gaussian mixture model, Householder flow, Variational inference
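The abstract notes that the Kullback–Leibler divergence between two mixture densities has no closed form and that the lower bound is redefined via an upper bound. One standard upper bound of this kind, stated here for reference (it follows from the log-sum inequality and is not necessarily the exact bound used in the paper), matches mixture components pairwise:

```latex
D_{\mathrm{KL}}\!\left(\sum_{k} \pi_k f_k \,\middle\|\, \sum_{k} \omega_k g_k\right)
  \;\le\; D_{\mathrm{KL}}(\pi \,\|\, \omega)
  \;+\; \sum_{k} \pi_k \, D_{\mathrm{KL}}(f_k \,\|\, g_k)
```

Each term on the right is available in closed form when the components f_k and g_k are Gaussians, so substituting such an upper bound for the intractable KL term yields a tractable, if looser, variational lower bound.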