
DAVA: Disentangling Adversarial Variational Autoencoder

ICLR 2023

Abstract
The use of well-disentangled representations offers many advantages for downstream tasks, e.g., increased sample efficiency or interpretability. Their quality is, however, determined to a large extent by the choice of dataset-specific hyperparameters, most notably the regularization strength. To address this, we introduce DAVA, a novel training procedure for variational auto-encoders that alleviates the problem of hyperparameter selection at the cost of a comparatively small overhead. We compare DAVA against models with an optimal choice of hyperparameters. Without any hyperparameter tuning, DAVA is competitive across a diverse range of commonly used datasets. Further, even under an adequate set of hyperparameters, the success of the disentanglement process remains heavily influenced by randomness in network initialization. We therefore present the new unsupervised PIPE disentanglement metric, capable of evaluating representation quality. We demonstrate the PIPE metric's ability to positively predict the performance of downstream models on abstract reasoning tasks. We also exhaustively examine correlations with existing supervised and unsupervised metrics.
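For context on the hyperparameter the abstract refers to: in the standard β-VAE objective, the regularization strength is the weight β on the KL term, and it is this dataset-specific weight that DAVA aims to remove from manual tuning. Below is a minimal, illustrative sketch of that baseline objective; it is background only, not the DAVA procedure itself, and the architecture and dimensions are assumptions for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """Minimal beta-VAE on flattened inputs; sizes are illustrative."""
    def __init__(self, input_dim=4096, latent_dim=10, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta):
    # Reconstruction term plus beta-weighted KL divergence to N(0, I).
    # `beta` is the dataset-specific regularization strength whose manual
    # selection DAVA is designed to make unnecessary.
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl
```

Too small a β typically yields entangled latents; too large a β sacrifices reconstruction, which is why the optimal value is dataset-dependent.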
Key words
Disentanglement learning, variational auto-encoder, curriculum learning, generative adversarial networks