Improving Disentanglement in Variational Auto-Encoders via Feature Imbalance-Informed Dimension Weighting

Yue Liu, Zhenyao Yu, Zitu Liu, Ziyi Yu, Xinyan Yang, Xingyue Li, Yike Guo, Qun Liu, Guoyin Wang

Knowledge-Based Systems (2024)

Abstract
Using a Variational Auto-Encoder (VAE) to learn disentangled representations holds great promise. However, feature imbalance arises during VAE training: the model tends to concentrate on learning some dimensions at the expense of others. Instead of attempting to rectify this imbalance, we exploit it and propose a Dimension Weighting method that boosts the disentanglement of VAE-based models. To conduct disentanglement learning under a fixed latent-space size, a Dimension Number Estimator estimates the intrinsic dimension of the raw data, which is then used as the latent-space size. Next, leveraging the feature imbalance, a Dimension Importance Evaluator separates the dimensions of the latent variable into important, unimportant, and general dimensions. By exerting different learning pressures on specific dimensions, we further optimize the variational lower bound of the model and retrain it, thereby promoting the disentanglement of the important dimensions. Experiments on four benchmark datasets show that Dimension Weighting further improves disentanglement without compromising model performance; in approximately 80% of cases, the results of disentanglement-metric evaluations achieve better scores than those of the original models. This reveals that not all dimensions of the latent variables are equally influential: by focusing on the crucial dimensions of the latent representation, the model can achieve better performance.
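The core idea — rank latent dimensions by how heavily the model uses them, then apply different learning pressures per group — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the per-dimension KL divergence of a diagonal-Gaussian posterior serves as the importance proxy, and the group fractions and weight values (`top_frac`, `bottom_frac`, `w_hi`, `w_lo`) are hypothetical choices for demonstration.

```python
import numpy as np

def per_dim_kl(mu, logvar):
    # KL(q(z_i|x) || N(0,1)) for each latent dimension, averaged over the batch.
    # mu, logvar: arrays of shape (batch, latent_dim).
    return 0.5 * np.mean(mu**2 + np.exp(logvar) - logvar - 1.0, axis=0)

def dimension_weights(kl, top_frac=0.25, bottom_frac=0.25, w_hi=2.0, w_lo=0.5):
    # Split dimensions into important / general / unimportant by ranking
    # their KL, then assign a higher learning pressure to important ones
    # and a lower pressure to unimportant ones.
    d = kl.shape[0]
    order = np.argsort(kl)[::-1]                      # most-used dimensions first
    w = np.ones(d)                                    # general dims keep weight 1
    w[order[: int(np.ceil(d * top_frac))]] = w_hi     # important dims
    w[order[d - int(np.ceil(d * bottom_frac)):]] = w_lo  # unimportant dims
    return w

def weighted_kl(mu, logvar, w):
    # Dimension-weighted KL term to be plugged into the variational lower bound.
    return float(np.sum(w * per_dim_kl(mu, logvar)))
```

In a training loop, `weighted_kl` would replace the usual uniform KL term of the ELBO, so the pressure on each dimension reflects its estimated importance rather than being shared equally.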
Keywords
Variational Autoencoder, Latent Space, Disentangled Representation, Dimension Weighting